Someone at Google thinks open-source AI is winning the race


A leaked document from an anonymous Google employee discusses the role of open-source AI projects and their potential impact on industry giants like Google and OpenAI.

Please note: This document likely represents a single opinion within Google. There is no indication that the content of the paper will influence Google’s business strategy, and it is just one of many perspectives. Nevertheless, it is interesting to see what is being discussed within Google about open-source AI.

The document highlights the open-source community's impressive recent achievements and questions whether the major players can maintain a competitive edge in this fast-moving space. The document was leaked on Discord and verified by SemiAnalysis.

Open-source AI outpacing the big players

“We have no moat. And neither does OpenAI,” is a key statement in the paper. Given the rapid progress of the open-source community, it argues, neither Google nor OpenAI can build a sustainable competitive advantage.


The open-source AI community has made remarkable progress recently, the paper says. Large language models (LLMs) can now run on smartphones, and personalized AI systems can be fine-tuned on a laptop within hours and trained at impressive speeds.

These developments have been made possible by the ability of the open-source community to rapidly iterate and improve AI models, the paper says.

“The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”

The “Stable Diffusion Moment” of large language models

Google’s models still have a small lead, the paper says, but that lead is rapidly closing. The open-source models are “pound-for-pound more capable,” as well as faster, easier to customize, and more privacy-friendly.

According to the paper, an important factor contributing to this rapid innovation is low-rank adaptation (LoRA), a method for fine-tuning AI models quickly and cheaply. LoRA, combined with the LLaMA leak, has enabled a flood of ideas and iterations from individuals and institutions around the world that have quickly outpaced the major players in the field, according to the paper.
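The core idea behind LoRA can be illustrated in a few lines: instead of updating a full weight matrix during fine-tuning, the pretrained weights are frozen and only two small low-rank factors are trained. The following is a minimal NumPy sketch of that idea; the dimensions, rank, and scaling value are illustrative choices, not taken from the memo or any particular implementation.

```python
import numpy as np

# Illustrative LoRA sketch: freeze a pretrained weight W (d_out x d_in)
# and learn two small low-rank factors B (d_out x r) and A (r x d_in).
# Only r * (d_in + d_out) parameters are trained instead of d_in * d_out.

d_in, d_out, r = 512, 512, 8          # rank r is much smaller than d_in, d_out
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init
alpha = 16                                  # scaling hyperparameter

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied without
    # ever materializing the full update matrix.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the frozen
# layer exactly, so fine-tuning starts from the pretrained behavior.
assert np.allclose(forward(x), W @ x)

full_params = d_in * d_out          # 262,144 for a full fine-tune
lora_params = r * (d_in + d_out)    # 8,192 trainable LoRA parameters
```

Because so few parameters are trained and the base weights never change, many independent adapters can be trained cheaply on top of one shared model, which is exactly the kind of fast, low-cost iteration the paper credits to the open-source community.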


The Washington Post reports that Google’s AI chief, Jeff Dean, announced internally in February that Google’s AI researchers would no longer be allowed to share their work publicly as they had done in previous years. Research would only be shared if it was already part of a product. The reason for this change in strategy was reportedly the massive success of ChatGPT.

Since ChatGPT and GPT-4, OpenAI has published only rudimentary information about its models. Key information such as the size of the model, the exact training process, or the training data used has not been published by OpenAI for competitive reasons.

The leaked document refers specifically to progress in open-source AI since the leak of Meta’s LLaMA, a major foundation language model. It could therefore be seen as a direct response to Dean’s change in strategy. In any case, these two events illustrate the different currents within Google and also paint a picture of the complexity of the current AI landscape.
