Artificial Intelligence is becoming central to the evolution of the global economy: according to Gartner, by 2023, 40% of large companies will employ AI in their processes.
Moreover, more than 50 countries, including Italy, have now presented national AI strategies, committing resources to the sector to remain competitive in the global market.
Even beyond the economic sphere, AI is experiencing a moment of great excitement: it is now applied to countless sectors, from medicine to space exploration, and the innovations multiply year by year, with fascinating ramifications.
2021 was a particularly fruitful year: just think of the progress made in the field of Deep Learning and the integrations between NLP models and generative AI software, which allow the creation of sounds, images, or videos from written inputs.
2022 promises to be another year of great innovations, but what will be the most interesting developments? It is not easy to give a definitive answer, but by aggregating the trends identified by sector research and the opinions of our engineers, we have put together what will probably be the most important trends in the field of Artificial Intelligence for 2022.
Natural Language Processing (NLP) models are among the most widespread Machine Learning applications in our daily lives: from the information retrieval function of search engines to automatic translations, it is practically impossible not to use them every time we use our smartphone or computer.
The most surprising implementations have arrived in the last couple of years: thanks to models like GPT-3 or BERT, NLP systems can now generate texts that are hard to distinguish from those composed by humans. These new models can write marketing copy, automatically analyze or translate documents, and assist online customers with their purchase decisions.
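To give a concrete sense of how such models are used in practice, here is a minimal sketch based on the open-source Hugging Face transformers library; the small, freely available gpt2 model is used purely as a stand-in, since GPT-3 itself is only accessible through OpenAI's paid API:

```python
# Minimal sketch: generating marketing-style copy with a pretrained
# language model. "gpt2" is a small, freely available stand-in for
# much larger models such as GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Introducing our new espresso machine:"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

The same pattern, pointed at a larger hosted model and a carefully written prompt, is what powers most commercial copy-generation tools.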
So far, the greatest progress has been made by increasing the amount of data and the number of parameters available to AI. The latest NLP model presented by Microsoft and NVIDIA in October 2021, Megatron-Turing NLG 530B, is a giant with 530 billion parameters.
In 2022, we can expect even more colossal NLP models (OpenAI is already working on GPT-4, rumored to reach as many as 100 trillion parameters), but we also believe we will see the emergence of new types of “distilled” and optimized models that can perform the same functions with more streamlined architectures.
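One of the standard techniques behind such streamlined models is knowledge distillation, in which a compact “student” network learns to imitate a large “teacher”. Below is a minimal PyTorch sketch of the usual distillation loss, assuming two models that output class logits; it illustrates the general idea, not any particular distilled model:

```python
# Minimal sketch of knowledge distillation: the student is trained on
# a mix of the true labels and the teacher's softened output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```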
Given how interconnected our everyday objects have become, it is not surprising that the number of cyber-attacks has been rising continuously for years. In 2021, the World Economic Forum included cyber-attacks among the main global risks, alongside growing inequality and climate change.
In 2022, experts expect new investment in and attention to AI as a cybersecurity tool, both to monitor networks for intrusions and to bring a new level of security to AI models themselves.
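A common entry point for AI in network monitoring is anomaly detection. As a purely illustrative sketch, the following uses scikit-learn's IsolationForest on synthetic data standing in for numeric features extracted from connection logs (bytes transferred, duration, packet counts, and so on):

```python
# Minimal sketch: flagging anomalous network connections with an
# Isolation Forest. The feature matrices here are synthetic stand-ins
# for preprocessed connection-log features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))   # historical baseline
new_connections = rng.normal(loc=0.0, scale=3.0, size=(20, 4))    # incoming traffic to score

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# predict() returns -1 for connections the model considers anomalous.
flags = detector.predict(new_connections)
print(f"{(flags == -1).sum()} of {len(flags)} connections flagged for review")
```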
Already in 2021 we saw AI software capable of integrating various functions into a single tool: as the case of DALL-E shows, new applications that combine very different AI capabilities are emerging.
This type of AI is called multimodal: a more advanced, more “intelligent” AI that can process data from different sources to provide more complex and coherent responses to the stimuli it receives from the outside world. It usually requires capabilities in natural language processing (NLP) and image processing, but also integration with various types of sensors and IoT systems to interact with the real world. The different capabilities are combined through advanced architectures composed of several layers of AI algorithms, each with different data processing abilities.
Thanks to this layered design, multimodal AI can generate more effective responses than the unimodal systems that are more widespread today. A multimodal AI can, for example, interpret data detected by different sensors to identify potential problems in the production chain and then intervene automatically thanks to integration with robotic systems; it can also receive messages and images from potential customers and provide purchase suggestions in real time, taking into account both explicit requests and implicit cues in the tone of the text or the images sent.
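To make the idea of layers with different data processing capabilities more concrete, here is a minimal sketch of a late-fusion multimodal classifier in PyTorch: a text branch and an image branch are encoded separately and their representations are concatenated before a shared decision layer. The encoders are deliberately simplified placeholders; a real system would plug in pretrained language and vision models:

```python
# Minimal sketch of late-fusion multimodal AI: separate encoders for
# text and images, whose outputs are concatenated and passed to a
# shared classification head.
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, vocab_size=10000, num_classes=5):
        super().__init__()
        # Text branch: embedding + pooling (placeholder for an NLP model).
        self.text_embed = nn.EmbeddingBag(vocab_size, 128)
        # Image branch: tiny CNN (placeholder for a vision model).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion head: combines both modalities into one prediction.
        self.head = nn.Sequential(
            nn.Linear(128 + 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, token_ids, images):
        text_features = self.text_embed(token_ids)
        image_features = self.image_encoder(images)
        fused = torch.cat([text_features, image_features], dim=1)
        return self.head(fused)

model = MultimodalClassifier()
tokens = torch.randint(0, 10000, (8, 20))   # batch of 8 tokenized messages
images = torch.randn(8, 3, 64, 64)          # batch of 8 RGB images
logits = model(tokens, images)              # shape: (8, 5)
```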
These types of AI could find wide application in 2022 in robotics and industrial automation (for example, in preventive maintenance), but also in marketing and sales and in healthcare. In these contexts, multimodal AI could deliver a higher degree of automation and effectiveness with little or no human supervision.
After DeepMind's excellent results in protein-folding research and the results obtained in the search for new drugs against SARS-CoV-2, there are great expectations for applying the most sophisticated Deep Learning methods to biomedical research in 2022, for example in the search for RNA-based therapies and vaccines.
Deep Learning solutions will likely find new applications in other scientific fields as well. From mathematics to astrophysics, from materials science to climatology, the support AI can provide to researchers is enormous: it can quickly analyze large amounts of empirical data, find unexpected correlations between very distant elements, and simulate very complex models to better explain scientific phenomena.
AI is based on the aggregation, analysis, and exploitation of data. To date, however, the ways we collect and use data have limitations that weigh on the evolution of the technology.
On the one hand, the need for AI that can deliver results even with little data, for example in the context of small and medium-sized enterprises, is becoming increasingly evident. To truly make the potential of this technology accessible to everyone, we need to imagine a different kind of AI that can sustain itself on a reduced volume of data. This will mean developing new data collection strategies, perhaps gathering fewer but more accurate and relevant data points, and inventing new “learning” models for machines.
On the other hand, there is the problem of data excess. Collecting billions of data points risks being more of an obstacle than a support to the development of Artificial Intelligence if the collection method is muddled, if there is no effective selection criterion, or if there is no clear vision of the problem to be solved. Today, many companies collect large amounts of data, often at great expense of resources and energy, without knowing what they are trying to extract from it.
According to various experts, untangling these two knots will be essential to advance the technology, and 2022 seems to be the right year to do so. In the next twelve months, we should see a wider spread of Machine Learning strategies such as Transfer, Active, or One-Shot Learning, the development of neural networks that compete with each other (known as Generative Adversarial Networks), and the affirmation of an increasingly data-centric AI.
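Transfer Learning is the most established of these strategies: instead of training a network from scratch on a small dataset, one starts from a model pretrained on a large generic corpus and retrains only its final layers. Here is a minimal PyTorch sketch, assuming a hypothetical small image dataset with just three classes:

```python
# Minimal sketch of transfer learning: reuse a ResNet pretrained on
# ImageNet and retrain only the final classification layer on a small
# task-specific dataset.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 3  # hypothetical small dataset, e.g. three product categories

# torchvision >= 0.13 syntax; older versions use pretrained=True.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so only the new head is updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for the new task.
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the parameters of the new head are passed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Because only the small final layer is trained, a few hundred labeled examples are often enough to obtain a usable model.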
Marketing Specialist at AIDIA, graduated in International Studies in Florence, passionate about history, economics, and the bizarre things of the world.
At Aidia, we develop AI-based software solutions, NLP solutions, Big Data Analytics, and Data Science. Innovative solutions to optimize processes and streamline workflows. To learn more, contact us or send an email to info@aidia.it.