September 4, 2023 – Reading time: 7 minutes
In this interview, our Head of Software and Data Technologies, Dr. Marc Großerüschkamp, and our AI developer, Mina Khosravifard, discuss the differences between developing classical software and artificial intelligence. They focus especially on the role of data and the safety of AI systems. One example of a successful AI project is the KARLI project, which used data collection planning and nonstructural testing methods to improve safety.
Dr. Marc Großerüschkamp: Mina, you are an experienced developer in the field of AI. What are some key differences in the development process between classical software and AI (Artificial Intelligence)?
Mina Khosravifard: There are many differences between the development processes of classical software and AI; nonetheless, I would like to point out two of them here. In contrast to traditional software development, which is typically rule-driven, AI development relies heavily on data. AI models require substantial volumes of high-quality data for training, validation, and testing. Consequently, data collection, preprocessing, and data management are essential parts of the AI development process.
The second key differentiator between AI and traditional software lies in the capacity for continuous training. A common issue is that the distribution of the data used for training during the initial development phase may diverge from the data that machine learning algorithms encounter during operation. These shifts in data distribution are known as "drifts". Continuous training is employed as a solution to manage and adapt to them.
In contrast, conventional software generally lacks this adaptability unless it is explicitly programmed to update based on predefined rules or parameters.
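The drift monitoring described above can be sketched with a simple statistical comparison between the training distribution and live data. The Population Stability Index (PSI) used here is one common heuristic for this, chosen as an illustration; it is not necessarily the method used in the KARLI project, and the threshold of 0.2 is a widely cited rule of thumb, not a universal constant.

```python
import math
import random

def psi(expected, observed, bins=10):
    """Population Stability Index between two 1-D samples.

    Higher values mean the observed distribution has drifted
    further from the expected (training) distribution.
    """
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0

    def hist(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(sample)
        # Small epsilon avoids log(0) for empty bins.
        return [max(c / n, 1e-6) for c in counts]

    e, o = hist(expected), hist(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]   # training-time data
live = [random.gauss(0.8, 1.3) for _ in range(5000)]    # shifted live data

score = psi(train, live)
# A PSI above ~0.2 is commonly treated as significant drift,
# which would trigger retraining in a continuous-training pipeline.
drift_detected = score > 0.2
```

In a real pipeline this check would run per feature on each batch of production data, with retraining scheduled whenever drift is detected.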
Dr. Marc Großerüschkamp: Since AI development is more data-driven, can you explain how you approach testing and debugging in AI development compared to classical software development?
Mina Khosravifard: In AI development, traditional testing approaches from classical software development may not be directly applicable to AI models because of their black-box nature. Testing methods adapted from classical software development may be helpful, but they cannot effectively capture the complexity of ANNs (Artificial Neural Networks). Scenario-based testing and neural-network-specific coverage criteria are among the strategies currently being used to build confidence in the behavior and safety of AI systems.
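One well-known example of a neural-network-specific coverage criterion is neuron coverage: the fraction of neurons whose activation exceeds a threshold for at least one test input. The sketch below uses a tiny hand-wired network and an arbitrary threshold purely for illustration; it is not a model or criterion from the KARLI project.

```python
def relu(x):
    return max(0.0, x)

# A toy 2-layer network: each layer is a list of neurons,
# each neuron a list of input weights (hypothetical values).
WEIGHTS = [
    [[0.5, -0.2], [-0.7, 0.9], [0.1, 0.4]],   # hidden layer, 3 neurons
    [[1.0, -1.0, 0.5]],                        # output layer, 1 neuron
]

def forward(x):
    """Run the toy net, returning the activations of every layer."""
    activations = []
    for layer in WEIGHTS:
        x = [relu(sum(w * xi for w, xi in zip(neuron, x))) for neuron in layer]
        activations.append(x)
    return activations

def neuron_coverage(test_inputs, threshold=0.1):
    """Fraction of neurons activated above `threshold` by any test input."""
    covered = set()
    total = sum(len(layer) for layer in WEIGHTS)
    for x in test_inputs:
        for li, layer_acts in enumerate(forward(x)):
            for ni, a in enumerate(layer_acts):
                if a > threshold:
                    covered.add((li, ni))
    return len(covered) / total

tests = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
coverage = neuron_coverage(tests)
```

A low coverage value signals that the test suite exercises only part of the network's internal behavior, prompting the generation of additional test inputs.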
Dr. Marc Großerüschkamp: Mina, you have been working on several AI projects so far. What are some challenges specific to AI development that you have faced in your career?
Mina Khosravifard: One of the very first challenges from my experience in the KARLI project is related to data collection. Despite the huge, time-consuming effort that data collection and labeling require in the context of supervised learning, data collection is not a one-time effort but rather an ongoing and iterative process that plays a crucial role in improving model performance. Continuous evaluation of data is another challenge in AI development: it refers to the ongoing process of assessing the quality, relevance, and suitability of the collected data for training and validating AI models.
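The continuous data evaluation described above can be pictured as a gate that each newly collected batch must pass before being admitted to the training set. The field names, labels, and thresholds in this sketch are hypothetical stand-ins, not the KARLI project's actual checks.

```python
def evaluate_batch(samples, valid_labels, speed_range=(0.0, 250.0)):
    """Split a batch into accepted and rejected samples.

    Rejected samples carry a list of reasons so that labeling or
    sensor problems can be fed back into the collection process.
    """
    accepted, rejected = [], []
    for s in samples:
        reasons = []
        if s.get("label") not in valid_labels:
            reasons.append("missing or unknown label")
        speed = s.get("speed_kmh")
        if speed is None or not (speed_range[0] <= speed <= speed_range[1]):
            reasons.append("speed out of range")
        (rejected if reasons else accepted).append((s, reasons))
    return accepted, rejected

# Illustrative batch: one clean sample, one unlabeled, one sensor glitch.
batch = [
    {"label": "lane_change", "speed_kmh": 87.0},
    {"label": None, "speed_kmh": 92.5},
    {"label": "braking", "speed_kmh": 900.0},
]
accepted, rejected = evaluate_batch(batch, {"lane_change", "braking"})
```

Running such checks on every incoming batch keeps the iterative collection loop honest: rejected samples point to gaps in labeling or faulty sensors rather than silently degrading the training set.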
Dr. Marc Großerüschkamp: AI models can be used to support or make decisions. So how do you ensure the ethical and responsible use of AI in your projects?
Mina Khosravifard: Unfortunately, the Three Laws of Robotics are not enough to capture the complexity of human values. Ensuring the ethical and responsible use of AI, especially in areas with higher levels of autonomy such as autonomous driving, is essential. Value learning is one approach that can be applied in AI algorithms to examine a machine's ethics. It is a subfield of artificial intelligence that aims to imbue AI systems with human-like values and ethical principles, and it can help ensure that an AI algorithm incorporates ethical considerations and aligns its decision-making with societal values.
Besides technical methods such as value learning, there are also regulatory approaches such as the European Union AI Act. This legislation defines risk categories, including risks that are deemed unacceptable and therefore prohibit the use of AI in certain areas. In addition, there are applications that have the potential to pose high risks but can be permitted under specific conditions; for these, it is necessary to prove that the application can still be considered safe. Currently, we are working to address the unique challenges posed by AI, specifically focusing on technical measures to minimize the associated risks.
Dr. Marc Großerüschkamp: Mina, I have one last question for you. Can you give an example of a successful AI project you have worked on, and how it differed from a classical software project in terms of development and implementation?
Mina Khosravifard: The KARLI project involves the development of adaptive, responsive, and level-compliant interaction for autonomous vehicles. Unlike in traditional software development projects, we place significant emphasis on data collection planning. To streamline this process, we have implemented a cloud-based system that automatically assesses the quality of newly collected data. As mentioned earlier, we cannot employ classical software testing approaches; instead, we use nonstructural test coverage criteria to ensure the completeness of scenarios within the data space. This approach has proven effective in thoroughly testing and validating our ML (Machine Learning) algorithms, enhancing overall safety by increasing the coverage of possible scenarios.
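The idea of nonstructural coverage over a data space can be sketched by discretizing the scenario parameters into cells and measuring which fraction of cells the test scenarios occupy. The dimensions below (speed and weather) are hypothetical stand-ins for the KARLI project's actual scenario parameters.

```python
# Hypothetical discretization of the scenario space.
SPEED_BINS = [(0, 30), (30, 80), (80, 130)]   # km/h ranges
WEATHER = ["clear", "rain", "fog"]

def cell(scenario):
    """Map a scenario onto one cell of the discretized data space."""
    for i, (lo, hi) in enumerate(SPEED_BINS):
        if lo <= scenario["speed"] < hi:
            return (i, scenario["weather"])
    return None  # outside the modeled data space

def scenario_coverage(scenarios):
    """Fraction of data-space cells occupied by at least one scenario."""
    covered = {cell(s) for s in scenarios} - {None}
    total = len(SPEED_BINS) * len(WEATHER)
    return len(covered) / total, covered

tests = [
    {"speed": 25, "weather": "clear"},
    {"speed": 50, "weather": "rain"},
    {"speed": 100, "weather": "fog"},
    {"speed": 100, "weather": "clear"},
]
coverage, covered_cells = scenario_coverage(tests)
```

Uncovered cells then directly identify which scenario combinations still need to be collected or generated, which is how such a criterion drives completeness rather than merely reporting a number.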
Dr. Marc Großerüschkamp: Thank you, Mina, for answering my questions. I am really looking forward to our next steps towards safe and secure AI!