A guest commentary by Prof. Dr. Dr. h. c. Michael ten Hompel, Director of the Lamarr Institute for Machine Learning and Artificial Intelligence and former Director of Fraunhofer IML as well as initiator of AI24 − The Lamarr Conference.
“Digitalizing everything and artificial intelligence in everything will change everything for all of us.” We said it in 2019; now it has become a reality. In the summer of 2023, the large language model ChatGPT passed the Turing test and is therefore “officially” considered to be artificial intelligence. But what has surprised everyone is that, as they have grown in size, these “transformers” have developed fundamentally new, creative abilities that were not programmed into them and are difficult to explain.

Large neural networks have clearly developed their own “view of the world”: to see what this looks like, we need look no further than the website for “Sora” (OpenAI), for example. You can now see more than just flat, still images there. If you enter a few lines of text (a prompt), Sora generates fascinating, high-resolution videos. AI seems to have developed an understanding of the relationships between objects. Reflections in the windows of a moving train, waves on a beach, a snowstorm or the human figure itself: they are all modeled correctly and have obviously not just been copied into the scene, but rather rendered in accordance with the laws of nature and of cause and effect.

Generative AI, which has been trained on tens of trillions of tokens, is living up to its name: it not only interprets, but also generates new versions of the reality it has learned. Naturally, even the latest AI systems still make mistakes, and if you look closely, you will spot hallucinations over and over again – but far less often than with previous versions. For the curious observer, one thing is becoming increasingly clear: artificial general intelligence (AGI) no longer seems so far away. In fact, it may only be a question of having enough computing power to achieve further, exponential growth in AI capabilities.
Apparently, that is how Mark Zuckerberg sees things: he has declared that he intends to build the largest AI cluster of all time, with 350,000 H100 GPUs from Nvidia, in order to create the first AGI by the end of the year (but “responsibly”).