Developments in and around AI are currently happening so quickly that it is difficult to make any predictions at all. Rather than trying to predict the trends for 2024, the question is: Where is this astounding pace coming from?
Here are the current drivers, the consequences for the AI ecosystem, and the AI problems that still need to be solved.
#1: Investment boom continues unabated
The AI sector is booming. True, companies have been investing heavily in the technology for a long time. However, the figures of past years pale in comparison to the amounts that tech companies have poured into start-ups and solutions over the last 12 months.
Even the major analyst firms can barely keep their forecasts current. Gartner, for example, forecast back in August that more than US$10 billion in investments will flow to AI start-ups by 2026. In view of new, massive commitments – such as the approximately US$4.5 billion that Amazon and Google have invested in the AI start-up Anthropic – this estimate may already be outdated.
#2: Limited capacities are fuelling the AI race
The investments are not only fuelling AI research but are also dragging the entire AI ecosystem with them – from the cloud to database systems and the semiconductor industry. The training of machine learning and large language models (LLMs) requires high computing power and storage capacities.
New processor series and super GPUs are clearly pushing the boundaries of what is possible. However, the facilities of chip manufacturers such as Nvidia have been fully booked for years, and prices are rising exorbitantly.
In the race for top AI performance, software giants such as Microsoft plan to enter the semiconductor business themselves in the coming years.
#3: Snowball effect for developer and IT tools
The AI hype is not only being fuelled from the outside. As an inherent automation technology, AI drives its own development. AI models help to create better AI models. Developers delegate time-consuming tasks to the systems, have code generated automatically, and thus massively shorten innovation cycles.
According to McKinsey, generative AI support can boost developers’ code-generation performance by up to 45%. Smart management tools in IT, in turn, optimise computing power in the cloud and in the company’s own data centre for AI operations in the enterprise environment. As a result, AI technology continues to advance unabated.
#4: AI for everyone: data democratisation
In addition to automation and optimisation, AI is also changing the use of data, especially in the combination of LLMs and natural language user interfaces (LUI, NLUI). Users can access information more easily via chatbots and Google’s Search Generative Experience approach. What used to be reserved for data scientists is now theoretically available to every employee in the company thanks to APIs.
In the future, department-specific applications will give way to a centralised, voice-enabled AI solution that draws on curated company data, delivers relevant answers in any format (e.g., text, image, or voice), and takes access rights and data protection regulations into account.
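To make the access-rights requirement concrete, here is a minimal sketch in Python. All names (`Document`, `retrieve`, the roles and texts) are hypothetical, and a real system would add embedding-based retrieval and an LLM on top; the point is simply that permission filtering happens before any answer is generated.

```python
# Minimal sketch (all names hypothetical): answer questions over curated
# company data while respecting per-document access rights.
from dataclasses import dataclass, field


@dataclass
class Document:
    text: str
    allowed_roles: set = field(default_factory=set)  # roles permitted to read


def retrieve(docs, user_roles, keyword):
    """Return only documents the user may read that mention the keyword."""
    return [
        d for d in docs
        if d.allowed_roles & user_roles and keyword.lower() in d.text.lower()
    ]


docs = [
    Document("Q3 revenue grew 12%.", {"finance", "exec"}),
    Document("New hire onboarding checklist.", {"hr"}),
]

# A finance user sees the revenue figure; an HR user does not.
print([d.text for d in retrieve(docs, {"finance"}, "revenue")])
```

Filtering by role before retrieval ensures the downstream model never even sees documents a user is not entitled to, which is simpler to audit than post-hoc answer redaction.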
#5: Of graphs and vectors: databases
Data democratisation requires special approaches for storing, linking, indexing, and querying data. Vector databases and their ability to store high-dimensional data efficiently were among the most discussed AI topics in 2023. According to analysts, vector databases are still at the beginning of their hype cycle. However, vector search is now also integrated as a standard feature in a large number of databases.
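The core operation behind vector search can be illustrated in a few lines of plain Python. This is a toy sketch, not a vector database: the "embeddings" are hand-made three-dimensional vectors, whereas real systems store thousands of dimensions per item and use approximate-nearest-neighbour indexes instead of a linear scan.

```python
# Minimal sketch of what vector search does: store embeddings and return
# the nearest neighbours of a query vector by cosine similarity.
# (Toy 3-dimensional vectors stand in for real model embeddings.)
import math


def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


store = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.1, 0.9],
}


def nearest(query, k=2):
    """Rank stored items by similarity to the query vector, best first."""
    return sorted(store, key=lambda name: cosine(store[name], query),
                  reverse=True)[:k]


print(nearest([0.85, 0.15, 0.05]))  # the animal vectors rank above "car"
```

Because similarity is geometric rather than keyword-based, semantically related items surface even when they share no exact terms with the query, which is what makes vector search a natural retrieval layer for LLM applications.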
Graph databases have further established themselves as AI enablers. Knowledge graphs link heterogeneous data into a semantic context, treating data and data relationships as equally valuable. This creates an optimal environment for network analyses, deep learning, ML, and AI. Paired with LLMs, for example, graphs set the boundaries and priorities needed to make AI results more accurate, explainable, and comprehensible.
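The grounding idea can be sketched with a tiny in-memory knowledge graph. This is an illustration only, using investment facts mentioned earlier in the article plus one illustrative product relation; a real deployment would use a graph database and a query language such as Cypher rather than Python tuples.

```python
# Minimal sketch (illustrative data): a knowledge graph as a set of
# (subject, relation, object) triples. Queries return explicit, stored
# facts, so every answer is traceable rather than generated from weights.
triples = {
    ("Anthropic", "develops", "Claude"),
    ("Amazon", "invests_in", "Anthropic"),
    ("Google", "invests_in", "Anthropic"),
}


def query(subject=None, relation=None, obj=None):
    """Return triples matching the given pattern (None acts as a wildcard)."""
    return sorted(
        t for t in triples
        if (subject is None or t[0] == subject)
        and (relation is None or t[1] == relation)
        and (obj is None or t[2] == obj)
    )


# Who invests in Anthropic? Each answer maps back to a stored fact.
print([s for s, _, _ in query(relation="invests_in", obj="Anthropic")])
# → ['Amazon', 'Google']
```

An LLM answering from such a graph can cite the exact triples behind its response, which is the explainability benefit the article describes.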
#6: Responsible AI on its own
Breaking open the AI black box is becoming increasingly urgent in view of AI failures that range from amusing to disturbing. AI hallucinations and indirect prompt injections are just two examples of how AI solutions both manipulate and allow themselves to be manipulated.
With increasing implementation, the question of accountability arises: Who is ultimately responsible for the AI-generated decisions, predictions, and content? Legal requirements will not take effect for at least two to three years. Companies will not be able to sit this out and will increasingly have to take responsibility themselves.
#7: More than just a chatbot
AI is considered a cross-sectional technology: It is highly technologically dynamic and can be used across all industries. Its potential goes far beyond that of an LLM AI agent such as ChatGPT. Chatbots were the poster child of AI last year. However, current AI projects are much more diverse, from predictions about the global climate (such as GraphCast) to the discovery of protein structures in the human body (like AlphaFold).
#8: A look at the 2024 hype curve
The Gartner Hype Cycle for Emerging Technologies 2023 shows that AI is far from losing speed in the face of these developments: generative AI occupies the most prominent place at the peak and, according to the analysts, will soon descend into the “Trough of Disillusionment.”
However, behind it, new AI approaches and solutions are already lining up to unleash the next hype in the coming years (e.g., AI Augmented). Seen in this light, hype is not a negative thing, but a key phase in exploring the many dimensions of a technology.
By Laxman Singh, Head of ASEAN and India, Neo4j
This article was first published by Frontier Enterprise