Generative AI in the Spotlight
Background & Stages
Generative AI is not new. With a few notable exceptions, most of the technologies we’re seeing today have existed for several years. However, it is the combination of various undercurrents that has now made it feasible to commercialize generative models and integrate them into everyday applications. Although the field still faces numerous obstacles, it is anticipated that the demand for generative AI will expand significantly in 2023 and beyond.
Since the early 2010s, artificial intelligence as a field has been going through a period of active growth and development:
- In 2014, Generative AI gained popularity with the introduction of Generative Adversarial Networks (GANs), a deep learning architecture capable of producing convincing images, such as faces, from random input data. GANs and Variational Autoencoders (VAEs) went on to inspire the creation of deepfakes, a technique for altering images and videos by swapping faces.
- Then, in 2017, the transformer architecture was introduced; it is the basis for large language models such as GPT-3, LaMDA, and Gopher. OpenAI's original DALL-E used a transformer to generate images from text, and GPT-3 made headlines for writing full articles.
- In 2021, OpenAI introduced a game-changing technique known as Contrastive Language-Image Pre-training (CLIP). This method became crucial in the development of text-to-image generators, as it effectively learns shared representations between images and text. CLIP, along with diffusion, a deep learning technique for image generation, was used in OpenAI's DALL-E 2 to produce remarkable high-resolution images.
- During 2022, advancements in algorithms, the use of larger models, and access to larger datasets led to further improvements in the output of generative models. This resulted in better images, more sophisticated software code, and longer coherent text generated by these models.
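The contrastive idea behind CLIP mentioned above can be illustrated in a few lines. This is a purely illustrative sketch with toy random vectors, not the real CLIP model or its learned encoders: every image embedding is scored against every text embedding via cosine similarity, and a symmetric cross-entropy loss rewards matched image-caption pairs on the diagonal.

```python
import numpy as np

def contrastive_logits(image_emb, text_emb, temperature=0.07):
    """CLIP-style scoring: cosine similarity between every
    image/text pair, scaled by a temperature."""
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return img @ txt.T / temperature

def symmetric_loss(logits):
    """Cross-entropy in both directions: each image should match
    its own caption (rows) and each caption its image (columns)."""
    n = logits.shape[0]
    def ce(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()
    return (ce(logits) + ce(logits.T)) / 2

rng = np.random.default_rng(0)
images = rng.normal(size=(4, 8))                      # 4 toy "image" embeddings
texts = images + rng.normal(scale=0.1, size=(4, 8))   # 4 matching "captions"
logits = contrastive_logits(images, texts)
print(symmetric_loss(logits))  # low loss: matched pairs sit on the diagonal
```

Training on this objective at scale is what lets a text prompt and the image it describes land near each other in the shared embedding space, which is exactly what text-to-image generators exploit.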
OpenAI's new generative AI-powered chatbot ChatGPT is a game-changer:
- It is a natural language processing tool that can create written content and even code on demand through a conversational interface. The AI-driven tool is built on OpenAI's GPT-3 family of large language models.
- Instagram took approximately 2.5 months, and Spotify about 5 months, to reach 1 million users. In contrast, ChatGPT reached 1 million users just 5 days after its launch in November 2022.
- By January 2023, it had reached 100 million monthly active users after only 2 months, outpacing TikTok, which took 9 months to reach this milestone.
AI Startups' Emergence During 2022
Beyond the buzz, a multitude of startups have emerged and are rapidly expanding the use of generative AI, covering everything from search engines to motion-capture animation. Despite their proliferation, most of these startups have received minimal equity funding, leaving significant room for investors to back this potentially groundbreaking technology. 2022 was a landmark year for investment in generative AI startups, with over $2.6 billion in equity funding raised across 110 deals.
Source: CB Insights
The Competition
Currently, the biggest beneficiaries of generative AI are large tech companies with vast amounts of data, computing power, and established user reach. For instance, Microsoft is leveraging its cloud infrastructure, its access to OpenAI's technology, and its market for office and creativity tools to bring the power of generative models to its users.
In January 2023, Microsoft invested an estimated $10 billion in OpenAI, valuing the company at $29 billion. Microsoft first invested $1 billion in OpenAI in 2019, and added more in a 2021 funding round, when the startup was already working closely with Azure, Microsoft's cloud service. The latest investment solidified Microsoft's position as OpenAI's exclusive cloud computing provider. Alongside it, Microsoft announced the new AI-powered Bing search engine and Edge browser. Bing currently holds about a 3% share of the global search market, and even modest share gains could translate into billions of dollars in advertising revenue. However, the competition is not just about search and advertising dollars, but also about where that business comes from and how it pressures rivals, particularly Google. As Microsoft CEO Satya Nadella put it: “I hope with our innovation they will definitely want to come out and show that they can dance. I want people to know that we made them (Google) dance.”
Google's highly anticipated AI chatbot tool, Bard, which has yet to be made available to the public, has come under criticism for a flawed response it generated during a demonstration:
A GIF shared by Google shows Bard answering the question: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” Bard offers three bullet points in return, including one stating that the telescope “took the very first pictures of a planet outside of our own solar system.”
However, a number of astronomers on Twitter pointed out that this is incorrect: the first image of an exoplanet was taken in 2004, as noted on NASA's website. The mistake highlights the biggest problem with using AI chatbots to replace search engines: they make things up.
Google's substantial investments in AI are primarily driven by the challenges posed by competitors. Intense competition in the search market is likely to take a toll on Google's profitability, both through ad spend lost to Bing and through the added cost of running AI-powered search compared to traditional search. While Microsoft may see only modest gains in search market share, worth billions of dollars in advertising revenue, losing share could have a significant impact on Google: search advertising accounted for 56% of Alphabet's total revenue in the December 2022 quarter.
But there is more to this war than search: Microsoft's initial investment in OpenAI, dating back to 2019, was driven by aspirations that go beyond chatbots. OpenAI technology could be integrated into the company's productivity tools, such as Outlook and Office 365, in the form of digital assistants, AI-suggested PowerPoint content and formatting, email sorting and suggested replies based on past interactions, suggested next best actions, and more. Integrating OpenAI and ChatGPT technology into Azure also has the potential to lure cloud customers away from Amazon's AWS and Google Cloud.
Recently, Microsoft has revealed its plans to incorporate ChatGPT into a premium version of Microsoft Teams. The chatbot will provide tailored meeting templates, create meeting notes, summarize content that is relevant to individual users, and translate notes and transcripts into 40 different languages. Additionally, ChatGPT will be able to condense meetings, calls, and webinars into chapters, provide them with titles, and highlight important names and information. This integration has the potential to revolutionize the way people participate in meetings, allowing them to more efficiently consume content that is relevant to their role while reducing the need for attendance at multiple meetings.
Google also aspires for AI integration beyond its search feature. In a recent earnings call, CEO Sundar Pichai announced Google's plans to incorporate generative AI into a multitude of its products, ranging from Google Docs to Gmail.
Competition in this field continues to intensify. With companies such as Chinese tech giant Baidu unveiling their own chatbot, Ernie Bot, even more players, from established companies to startups, are expected to enter the arena.
Meta is a newcomer to the field: it recently announced it was releasing to researchers a new large language model, the core software of a new artificial intelligence system, heating up an AI arms race as Big Tech companies rush to integrate the technology into their products and impress investors. Meta may be at a disadvantage since it is not a cloud provider; on the other hand, its exceptionally rich data and constant social engagement may allow it to leverage the technology better than any of its competitors.
Grove Ventures Generative AI Approach
Our view at present is that most current investments in the generative AI field do not align with long-term venture capital goals. The majority of AI companies being founded and funded are unlikely to achieve the growth necessary to become billion-dollar enterprises. They are better suited to becoming small, non-venture-backable businesses, or value-added features in an incumbent's existing market offering. Grove opts for sustainable B2B businesses with the ability to scale and create category-defining companies.
We doubt the prospects of many companies in the generative AI field, as intense competition is to be expected. Most emerging companies rely on similar algorithms that they did not develop themselves. While they may have made slight adjustments, the underlying models can be accessed by anyone at low cost and with minimal coding experience.
Additionally, these algorithms are still far from perfect when it comes to broad topics, and it may take some time for a significant portion of the population to adopt them. On top of that, many existing tools and platforms have already started to introduce their own offerings. It is possible that this trend may only serve to strengthen existing market players, leaving only a small portion of the market available for new startups to enter.
The initial phases of a technology stack for generative AI are beginning to take shape, with many new startups entering the market to create foundation models, develop AI-native applications, and establish infrastructure and tooling. We see infrastructure players as the primary beneficiaries of this market, as they have captured most of the funding flowing through the stack. Most model providers, though responsible for the very existence of this market, are currently “loss leaders” who subsidize the use of the models as the market develops and attracts users. Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins.
To put it differently, the companies that are adding the most value, such as those involved in training generative AI models and implementing them in novel applications, might not be the ultimate beneficiaries of Generative AI. It is much more challenging to anticipate what will transpire in the future. Nonetheless, our view is that the crucial factor to consider is determining which elements of the stack are truly unique and can be adequately safeguarded.
The generative AI tech stack can be divided into three layers at this point:
- Applications that integrate models into a user-facing product (example: Midjourney)
- Foundation models that power AI products (examples: OpenAI's GPT models, Stability AI's Stable Diffusion)
- Infrastructure vendors (cloud platforms and hardware manufacturers) that run training and inference workloads for generative AI models (examples: Azure, Nvidia)
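The layering above can be made concrete with a deliberately simplified sketch. All class and method names here are hypothetical and for illustration only: an application wraps a prompt around user intent, a foundation model produces the completion, and every call ultimately consumes metered compute at the infrastructure layer, which is where the spend accumulates.

```python
from dataclasses import dataclass

# Infrastructure layer: metered GPU compute (toy, made-up cost model).
@dataclass
class CloudGPU:
    gpu_seconds_billed: float = 0.0
    def run(self, work_units: int) -> None:
        self.gpu_seconds_billed += work_units * 0.01

# Model layer: a stand-in for a hosted foundation model.
@dataclass
class FoundationModel:
    infra: CloudGPU
    def complete(self, prompt: str) -> str:
        self.infra.run(work_units=len(prompt))  # inference consumes compute
        return f"[completion for: {prompt!r}]"

# Application layer: a user-facing product wrapping the model.
@dataclass
class CopywritingApp:
    model: FoundationModel
    def draft_tagline(self, product: str) -> str:
        return self.model.complete(f"Write a tagline for {product}")

infra = CloudGPU()
app = CopywritingApp(FoundationModel(infra))
app.draft_tagline("a smart kettle")
print(infra.gpu_seconds_billed)  # spend accrues at the infrastructure layer
```

However the application and model layers differentiate themselves, every request flows down this chain, which is why cloud and hardware vendors capture revenue from the whole stack.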
Although the initial phase of generative AI applications is beginning to gain traction, these apps are facing challenges with retaining users and standing out from competitors. It is not yet apparent that selling end-user apps is the sole or most effective method of creating a sustainable generative AI enterprise.
Generative AI owes its existence to model providers such as Google, OpenAI, and Stability AI, whose groundbreaking research and engineering made it possible. Thanks to innovative model architectures and the scaling of training pipelines, the astonishing capabilities of large language models (LLMs) and image-generation models are available to all of us. Despite the widespread usage and buzz surrounding these models, the revenue they generate for these companies is still relatively modest.
Nevertheless, it appears that infrastructure vendors have a hand in nearly everything and are the ones reaping the benefits. In generative AI, almost everything involves passing through a cloud-hosted GPU (or TPU) at some point. The most disruptive computing technology is now more dependent on computational resources than ever before. As a result, a considerable portion of the revenue generated in the generative AI market ultimately goes to infrastructure firms. It is reasonable to estimate that more than 20% of total revenue in generative AI today is channeled towards cloud providers. Additionally, startups that train their own models have obtained billions of dollars in venture capital, with most of it usually being spent on cloud providers. Furthermore, many public tech companies invest hundreds of millions of dollars annually in model training, either through external cloud providers or by working directly with hardware manufacturers.
Consequently, infrastructure is a profitable, enduring, and seemingly secure layer in the stack. Recent reports indicate that training and inference for OpenAI's ChatGPT alone can demand as many as 10,000 GPUs, or possibly more depending on other AI applications and variations. While this is good news for Nvidia and AMD, excessive demand for these GPUs could lead to a chip shortage, which would ultimately increase adoption costs. Despite the steep cost, analysts suggest it may simply be the price of doing business in a rapidly growing market that requires substantial investment but promises significant returns. But having faster GPUs readily available is not, by itself, sufficient to sustain offerings such as ChatGPT. Companies must also enhance other parts of their infrastructure to operate these AI services effectively; in fact, they must upgrade not only their computational capacity, but also their networking and power infrastructure.
Grove has been a long-time investor in infrastructure-level companies, and our portfolio is well positioned to benefit from the generative AI sector, with solutions that make generative AI the preferred and easily attainable technology. For example:
- NeuroBlade’s Hardware Enhanced Query System (HEQS) will define a new category of Hyper Compute for Analytics and deliver dramatically faster data analytics. As enterprise data volumes explode while the compute power needed to process them becomes harder to harness, NeuroBlade’s HEQS unlocks new boundaries of performance and power consumption, allowing faster time to insight.
- Teramount develops a patented optical connectivity solution for connecting optics to silicon, increasing data transfer speed while reducing power, cost, and size. Teramount’s solutions address the ever-growing demand for high-speed data transfer in datacom and telecom applications. Optical connectivity is the ultimate solution for high-speed data transfer; without it, existing copper-based interconnect technologies will reach their performance limit.
- UnifabriX develops innovative technologies for the Cloud, Telco, and Enterprise markets that dynamically scale, accelerate, and transform heterogeneous data centers into fluid, workload-centric infrastructures. UnifabriX’s Smart Memory Node enables data center operators to unleash the speed, density, and scale of their infrastructure.
Generative AI is a game-changer. We are all adapting to the rules in real-time, and there is an enormous amount of value that will be unlocked. The upcoming technology landscape will look vastly different, and we are excited to be a part of it!