Core cloud technology barely got a mention.
This week, 30,000 people gathered in Las Vegas to hear what’s new and next from Google Cloud, and what they heard was generative AI, all the time. Google Cloud is, first and foremost, a cloud infrastructure and platform provider. If you didn’t know that going in, you could easily have missed it amid the AI news.
That’s not to diminish what Google had on display, but much like Salesforce at its traveling road show in New York City last year, the company barely discussed its core business except in the context of generative AI.
Google released a slew of AI enhancements designed to help users take advantage of the Gemini large language model (LLM) and improve productivity across the platform. It’s a worthy goal, and Google spread the announcements across the main keynote on Day 1 and the Developer Keynote the following day, with plenty of demos meant to show the power of these solutions.
But many of those demos felt overly simplistic, even allowing for the constraints of a keynote slot. Most drew on examples from inside the Google ecosystem, when in reality most companies keep much of their data outside of Google.
Some of the examples seemed achievable without AI at all. During one e-commerce demo, for instance, the presenter called the vendor to complete an online transaction. The point was to show off a sales bot’s conversational abilities, but the buyer could just as easily have completed the step on the website.
That’s not to say generative AI lacks useful applications, whether that’s writing code, analyzing and querying a large corpus of content, or interrogating log data to figure out why a website went down. The company also introduced task- and role-based agents aimed at individual developers, creative teams, employees and others, which could put generative AI to work in practical ways.
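To make the log-analysis use case concrete, here is a minimal, hypothetical sketch of how a question might be posed against log data with an LLM. The `build_log_prompt` helper and the sample log lines are invented for illustration; the commented-out client call assumes Google’s `google-generativeai` Python package, which is not something demoed in the keynote.

```python
# Hypothetical sketch: asking an LLM why a website went down, based on logs.
# The prompt assembly below is plain Python; the actual model call is left
# commented out because it requires an API key and network access.

def build_log_prompt(log_lines, question):
    """Pack recent log lines and a question into a single LLM prompt."""
    logs = "\n".join(log_lines[-200:])  # cap the prompt at the newest lines
    return (
        "You are helping diagnose a website outage.\n"
        f"Here are the most recent log lines:\n{logs}\n\n"
        f"Question: {question}\n"
        "Answer concisely and cite specific log lines."
    )

sample_logs = [
    "12:01:03 INFO  request /checkout 200 45ms",
    "12:01:04 ERROR db connection pool exhausted",
    "12:01:05 ERROR request /checkout 500 2ms",
]
prompt = build_log_prompt(sample_logs, "Why did the site go down?")
print(prompt)

# With a real client (assumption: google-generativeai installed and configured):
# import google.generativeai as genai
# model = genai.GenerativeModel("gemini-pro")
# print(model.generate_content(prompt).text)
```

The value here is less the plumbing than the workflow: instead of grepping logs by hand, an engineer asks a question in plain language and lets the model surface the relevant error lines.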
Beyond the AI tools that Google and other vendors are building for their customers, much of the talk was about companies building their own solutions on top of Google’s models. But I came away with the impression that the presentations glossed over many of the obstacles that stand between a company and a successful generative AI implementation. However easy the vendors make it sound, rolling out any new technology inside a large enterprise is genuinely hard.
Making Big Changes Is Never Easy
Like other major technological shifts of the last 15 years, this one arrives with grand promises of benefits. That was true of mobile, the cloud, containerization, marketing automation and more. Yet each of those advances introduced a new layer of complexity, and large companies move more slowly than we imagine. AI feels like a far bigger lift than Google, or frankly any of the large vendors, is letting on.
As we’ve seen with those earlier shifts, the hype is often followed by disappointment. Years after these technologies became widely available, we’ve watched big companies that arguably should be taking advantage of them still merely experimenting, or sitting out entirely.
Companies can miss out on technological advances for many reasons: a lack of motivation, a legacy technology stack that makes it hard to adopt newer solutions, or a cadre of corporate skeptics (in legal, HR, IT or other departments) who resist real change for reasons ranging from caution to internal politics.
Vineet Jain, CEO of storage, governance and security company Egnyte, sees two kinds of companies: those that have already made a substantial move to the cloud, for whom adopting generative AI will be easier, and the laggards, for whom it will likely be a struggle.
Many of the companies he talks to still run most of their technology on-premises, which makes it premature to even think about how AI can help them. “We talk to many ‘late’ cloud users who haven’t started or are very early in their journey to become digital,” Jain told TechCrunch.
AI could force these companies to think hard about catching up in the digital transformation race, he said, but starting from so far behind will make it difficult. “These companies will have to solve those issues first, and then they can use AI once they have a fully developed model for data security and governance,” he said.
It Always Comes Down to Data
Big vendors like Google make implementing these solutions sound easy, but as with any complex technology, what looks simple on the front end is rarely simple on the back end. One phrase I heard repeatedly this week was “garbage in, garbage out.” It applies to the data used to train Gemini and other large language models, and it applies even more when it comes to generative AI.
It all starts with data. If your data house isn’t in order, it will be very hard to get it into shape to apply LLMs to your use case. Kashif Rahamatullah, a Deloitte partner who heads the firm’s Google Cloud practice, was mostly impressed by what Google announced this week, but acknowledged that companies without clean data will struggle to implement generative AI solutions. Those conversations may start with AI, but they quickly turn into: “I need to fix my data, get it clean, and get it all in one place, or almost one place, before I start getting the real benefit out of generative AI,” Rahamatullah said.
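As an illustration of the “fix my data first” step Rahamatullah describes, here is a small, hypothetical hygiene pass: deduplicating records and normalizing fields before they feed any AI pipeline. The field names and `clean_records` helper are invented for this example, not part of any Google or Deloitte tooling.

```python
# Hypothetical example: basic data hygiene before records reach an AI pipeline.
# Deduplicate on a key, trim whitespace, and normalize casing so downstream
# retrieval or fine-tuning isn't working from inconsistent inputs.

def clean_records(records):
    """Deduplicate by 'customer_id' and normalize string fields."""
    seen = set()
    cleaned = []
    for rec in records:
        key = rec.get("customer_id")
        if key is None or key in seen:
            continue  # drop keyless rows and duplicate keys
        seen.add(key)
        cleaned.append({
            "customer_id": key,
            "email": rec.get("email", "").strip().lower(),
            "region": rec.get("region", "").strip().upper(),
        })
    return cleaned

raw = [
    {"customer_id": 1, "email": "  A@Example.com ", "region": "emea"},
    {"customer_id": 1, "email": "a@example.com", "region": "EMEA"},  # duplicate
    {"customer_id": 2, "email": "b@example.com", "region": " apac"},
]
print(clean_records(raw))
```

Trivial as it looks, this is the kind of unglamorous work (consolidating sources, resolving duplicates, agreeing on canonical formats) that enterprises typically have to finish before “getting the real benefit out of generative AI.”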
For its part, Google has built generative AI tools to make it easier for data teams to connect to data sources inside and outside of Google. “It’s really meant to speed up the data engineering teams by automating many of the very labor-intensive tasks that need to be done to move data and get it ready for these models,” a Google vice president and general manager for databases, data analytics and Looker told TechCrunch.
That should help companies further along in their digital transformation connect and clean up their data. But even with these tools, companies like the ones Jain described, which haven’t taken real steps toward going digital, could find it a harder road.
And none of that accounts for AI’s own set of challenges beyond pure execution, says Andy Thurai, an analyst at Constellation Research. Those challenges apply whether you build on an existing model or try to create a custom one. “Companies need to think about governance, liability, security, privacy, ethical and responsible use, and compliance when putting either solution into place,” Thurai said. None of that is trivial.
Attendees from business, IT, development and beyond may have come to Google Cloud Next this week looking for what the company has planned next. But if they weren’t in the market for AI, or their organization isn’t ready for it yet, Google’s all-AI focus in Sin City may have left them a little dazed. For companies that aren’t technologically sophisticated, it could be a long time before they can take full advantage of these tools, beyond the prepackaged solutions Google and other vendors are delivering.