Google is worried that generative artificial intelligence isn’t as accurate or as useful as currently advertised.
There have been major disagreements among internal Google engineers about whether the service adds value at all.
Google product managers, designers, and engineers have used a chat forum to openly debate the AI tool's effectiveness and utility, with some questioning whether the enormous resources going into development are worth it.
The problem with much of the data being used to build the software is that Google cannot independently verify whether it is true. The AI systems are trained on massive amounts of text that form the building blocks of chatbots, but this text is simply sitting on the internet, and availability does not make it accurate.
Last month, Google unveiled its most ambitious update yet: connecting Bard to its most popular services, such as Gmail, Maps, Docs, and YouTube.
However, the rollout of these updates has coincided with a wave of new complaints about the tool generating made-up facts and giving potentially dangerous advice. Meanwhile, the thousands of low-paid contractors Google relies on to train Bard work from convoluted instructions and are asked to complete their tasks in minutes.
In my opinion, Google is attempting to roll out this product as fast as possible without really focusing on the quality.
Inside and outside the company, the internet-search giant has been criticized for providing low-quality information in a race to keep up with the competition, while brushing aside ethical concerns.
For Google, ensuring the success of its Bard AI chatbot is of utmost importance. The company is far and away the leader in search, and that financial lifeblood generates about 80% of parent company Alphabet’s revenue.
At Bard’s launch, the company was upfront about its limitations, including the possibility that the AI tool could generate convincing-sounding lies.
Google takes advantage of an army of underpaid and overworked contractors in order to refine Bard’s responses and I believe that is an extremely rash strategy.
Executives also must consider the consequences of the enormous costs needed to maintain large language models.
Google has reacted by downplaying the fears, the doubts about usefulness, and the very real possibility that it may not know what it is doing.
We are in unknown territory now with unproven technology, and Bard could end up becoming a giant bust.
At what point do engineers who are egging each other on start to question the core project? Remember, these engineers have a monetary and personal incentive to continue because they are paid around half a million dollars per year. If the project ends in humiliation for Google, the engineers simply move on to the next engineering job, and Google writes down the losses.
The beginning of 2023 was beset with AI euphoria, only for the latter half of the year to bring the realization among investors that it would take a while for any of this technology to meaningfully boost revenue. Questioning the idea itself is yet another downgrade to AI momentum, and investors need to be cautious right now instead of throwing money at whatever sticks.
At some point, management will need to look more closely at this project and not make it solely about catching up with Microsoft-backed ChatGPT. Next year will go a long way toward proving whether this technology is legitimate, and we remain on a knife edge to see how it plays out. My bet is that nothing really hits until later in the year.
Even if it doesn’t go exactly to plan, I do believe there are revenue-boosting applications of this technology in the long term, so the picture is not entirely negative for Google.
It could be that Google realizes that using the best data coupled with the best engineers is a better combination than what they are doing with Bard.