January 24, 2024 – Reading time: 10 minutes
According to a study by Bloomberg Intelligence published in June 2023, the market for generative AI will grow to an impressive €1.2 trillion by 2032, up from an estimated €61 billion expected for 2023. That corresponds to an average annual growth rate of well over 40%. It is worth stressing once again that this is exclusively the market for generative AI: while other AI technologies essentially analyze input and, for example, classify it, generative AI focuses on the creation of new, creative content.
Whether the results are actually creative or more repetitive in nature is something I will leave aside at this point. A key role in generative AI is played by large language models, which caused a furor not only because a chatbot made AI tangible for everyone, but also because the experience is genuinely impressive. The study therefore covers only part of the AI market, and this part alone is expected to reach the aforementioned €1.2 trillion by 2032. Perhaps the following comparison helps you as much as it did me: the sum is roughly equivalent to the combined turnover of Apple, Microsoft, Meta, Google and AWS in 2022.
Given these dimensions, who doesn’t ask themselves how they can get a slice of the pie for their own business? Or do you not even ask yourself this question because you assume that this increase in turnover will only be shared by the aforementioned tech giants?
Be optimistic. The giants may be the hosts, but we are all invited to the party. And certainly not out of charity: without guests, there is simply no atmosphere. AI is not an end in itself. As fascinating as many of these technologies are, without applications, products or services that make use of the new possibilities of generative AI, the market is actually much smaller, if it exists at all. Indeed, after a rapid boom, the past year has seen stagnating and at times declining user numbers on the famous ChatGPT platform. This does not mean, however, that the potential of generative AI has already evaporated and that it was all just hot air.
A clear distinction should be made between the ChatGPT application and the underlying technology, i.e. the corresponding language model. Even though there is currently a paid version with a more powerful model, we can confidently assume that this product will not be the biggest star in the AI firmament, at least not in 2032: at the current pricing, it would take 5 billion paying customers to reach a market size of €1.2 trillion. Rather, ChatGPT illustrates the power of large language model (LLM) technology in a simple way and inspires users with ideas of what else can be done with it.
And this is precisely where every company can secure a slice of the AI pie, provided it is able to identify and implement the right application scenarios for (generative) AI. Unless, of course, ChatGPT does reach 5 billion users, in which case the expected market is already fully occupied and everyone else is left empty-handed. I think you now understand why we simply won't allow ChatGPT this enormous increase in user numbers over the next few years.
A long-term AI boom will only come about through the integration of this technology into products and processes. Let’s take a look at the pie together (in Figure 1) and get closer to the answer to the important question of how to get a slice of it.
Figure 1: Market share for generative AI in 2032 according to a study by Bloomberg Intelligence.
Who would have guessed: the hosts take the biggest bite. This is the 36% of the market accounted for by the hardware infrastructure required to develop and train generative AI. German technology companies will probably have little to gain here. Things get much more exciting when we take a closer look at the second-largest piece: with 21%, or around €250 billion, this is the market for (software) applications that make use of generative AI. These can be personal digital assistants, for example, but also software solutions that analyze requirements specifications and compare them with system specifications, or that automatically create standardized reports from large amounts of information. The study's market for “inference devices” covers hardware products for machine vision and communication that use generative AI; with a share of 13%, or €160 billion, this area also leaves room for companies' own product ideas. Taken together, the market for providers who integrate generative AI into their software or hardware products accounts for a good third. In my view, however, the much more important message is that the market will only grow through meaningful and useful applications.
We will look at specific application possibilities in more detail in a moment. First, let's ask once again what a technology company needs to do in order to participate in this part of the market: make your service, software or system better by integrating an interface to a generative AI in such a way that the user gains a clear advantage, e.g. through new or more intuitive functions.
Your innovation roadmap is already packed with great new products and none of them needs generative AI? Then I suggest you make your working life easier with the help of generative AI by building yourself a suitable solution to automate recurring tasks. This type of use can also generate considerable added value in the company by supporting project managers, requirements engineers, test managers and others in their tasks. One thing is particularly important here: it is not enough to simply provide access to ChatGPT. Instead, it is advisable to use an individual application that accesses a secure LLM service in the background. There are two main reasons for this. First, no technology company should willingly and permanently hand critical information from development to another company via a chatbot, if that company also uses the data to improve its model.
Otherwise, such confidential information from development could end up encoded in the model and thus become available, at least indirectly, to other companies that use it. Clearly not a good way to protect intellectual property. The second reason should not be overlooked either: if a language model is to be used sensibly, for example to make certain tasks more efficient, then its use must be standardized to some extent. This is the only way to achieve reliable, good results that do not depend on the user's skill in prompting the language model. In addition, its use should be integrated into existing workflows and tools, for example to enable quality assurance.
But enough about the market and theory. Let's move on to practice and to the promised applications, which, in some modified form, may also be relevant for you or may even already be in use.
Let's first look at the concept behind generative AI: language models such as GPT 3.x or 4.x belong to the class of large language models, or LLMs for short. They are trained on huge amounts of data, which enables them to formulate words, sentences and even meaningful answers. Such a model already covers a wide range of possible applications and functions. Here, however, we want to focus on specific tasks and use only those. To do this, we create a user interface tailored to a specific use case or set of tasks. It becomes the interface between human and AI, replacing the familiar chat interface. This limits the interaction between human and AI, which has clear advantages: the tasks are carried out in a standardized way, and integration into the work process can succeed smoothly. Among other things, this user interface contains ready-made formulations and logical processes that ensure that the task is standardized and that the best possible quality of results is achieved.
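The idea of ready-made formulations can be sketched in a few lines of code. The following is a minimal illustration, not a real product: the fixed, quality-assured instruction lives in a template, the user only supplies the task-specific fields, and the actual model call is injected as a function so that any backend (for example a secure, self-hosted LLM service) can be plugged in. All names here are hypothetical.

```python
# Minimal sketch of a task-specific LLM wrapper: the user never writes
# free-form prompts; the application fills a fixed template, so results
# stay standardized regardless of the user's prompting skill.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskTemplate:
    name: str
    instruction: str  # fixed, quality-assured wording with placeholders

    def build_prompt(self, **fields: str) -> str:
        # Insert the user-supplied fields into the standardized instruction.
        return self.instruction.format(**fields)

def run_task(template: TaskTemplate, llm: Callable[[str], str], **fields: str) -> str:
    """Send the standardized prompt to the injected model backend."""
    return llm(template.build_prompt(**fields))

# Example template: summarize a requirement in one sentence.
summarize = TaskTemplate(
    name="summarize_requirement",
    instruction="Summarize the following requirement in one sentence:\n{text}",
)

# Stand-in for the real model call (e.g. an HTTP request to an internal LLM service).
def fake_llm(prompt: str) -> str:
    return "summary of: " + prompt.splitlines()[-1]

print(run_task(summarize, fake_llm, text="The system shall log all errors."))
```

In a real application, `fake_llm` would be replaced by a call to the company's secured LLM endpoint; the rest of the structure stays the same.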
If you want to invest even more resources in your own AI, you can train it further to specialize it for your own needs and tasks. However, this specialization requires significantly more time, a large amount of data and expertise to implement effectively. For many tasks, it is not necessary.
In the first example, we use an AI to create reports from data records such as Excel spreadsheets, emails and other digital documents. Only the relevant records need to be transferred to the AI via the interface. The AI then processes the information they contain and produces the desired results in a predefined format, for example as an Excel table or as a written report noting only the most important figures. No chat-style interaction between user and AI is necessary: the tasks are predefined and invoked via the corresponding interface, so the reliability of the results remains independent of the user.
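The data-transfer step of this first example can be sketched as follows, under the assumption that the records arrive as CSV data and that the reporting instruction is fixed by the application rather than typed by the user. The function names and the prompt wording are illustrative, not from any real product.

```python
# Sketch: serialize tabular records into a standardized reporting prompt.
# The instruction text is fixed by the application, so every report request
# reaches the model in the same, quality-assured form.
import csv
import io

def records_to_prompt(rows: list[dict]) -> str:
    # One line per record, "key=value" pairs, appended to the fixed instruction.
    lines = [", ".join(f"{k}={v}" for k, v in row.items()) for row in rows]
    return ("Create a short status report listing only the key figures "
            "from these records:\n" + "\n".join(lines))

# Example input, e.g. exported from an Excel sheet.
csv_text = "project,hours,status\nAlpha,120,green\nBeta,80,yellow\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
prompt = records_to_prompt(rows)
print(prompt)
```

The resulting prompt would then be handed to the LLM service behind the interface; the user only selects the records, never edits the instruction.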
The second example addresses a relatively time-consuming task: estimating the amount of work involved in new projects. In project management in particular, this is a recurring task that costs a lot of time each time, yet it can be handled very efficiently by an AI. It is important to specify the individual work packages in as much detail as possible. With this information, the AI takes over the estimation of the individual work packages and can also output summaries of effort or other calculations. This is where another advantage of standardized communication with a language model comes into play: the accuracy of the results and possible deviations can be determined and checked, and the application is only used if it meets the requirements. I find it particularly exciting to compare different estimates of the same task under different conditions. For a programming task, for example, you can specify and vary the frameworks to be used as well as the developer's experience; this also helps to check the plausibility of the AI-based estimates. Of course, there are many other applications for AI. A few years ago, we started a series of articles in Projektmagazin describing several applications in project management, and the Harvard Business Review has also published an article on the impact of AI in project management.
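The idea of varying the conditions of an estimate can be sketched as a small comparison loop. This is a hedged illustration under assumed names: the prompt template, the condition dimensions (framework, experience level) and the stubbed model call are all placeholders for a real LLM integration.

```python
# Sketch: request AI-based effort estimates for the same work package
# under varying conditions, so the plausibility of the answers can be
# checked by comparison. The model call is stubbed for illustration.
from itertools import product

TEMPLATE = ("Estimate the effort in person-days for this work package: {task}. "
            "Framework: {framework}. Developer experience: {experience}.")

def estimate_all(task, frameworks, levels, llm):
    """Collect one estimate per (framework, experience) combination."""
    results = {}
    for fw, exp in product(frameworks, levels):
        prompt = TEMPLATE.format(task=task, framework=fw, experience=exp)
        results[(fw, exp)] = llm(prompt)
    return results

def stub_llm(prompt):
    # A real implementation would call the secure LLM service here.
    return "5 person-days" if "senior" in prompt else "9 person-days"

estimates = estimate_all("REST API for reports",
                         ["Django", "Flask"], ["junior", "senior"], stub_llm)
for conditions, effort in estimates.items():
    print(conditions, effort)
```

Comparing the entries of `estimates` side by side is exactly the plausibility check described above: implausible jumps between similar conditions are easy to spot.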
INVENSITY has been advising companies on the use of AI for more than six years. LLMs have indeed had a major impact and caused a paradigm shift: before the LLM boom, a large amount of proprietary data was almost always necessary to train meaningful models. Thanks to the availability of large language models, it has become much easier to implement individual AI applications in companies.