Tension and technology: How AI can benefit defense and safety
Sander Klomp defended his PhD thesis at the Department of Electrical Engineering on July 2nd.
![Sander Klomp banner image](https://assets.w3.tue.nl/w/fileadmin/_processed_/a/1/csm_Sander%20Klomp%20Banner%20image_39da3ba024.jpg)
In recent years, tensions between nations have been on the rise, resulting in the spread of misinformation, abuse of personal data, and even armed conflict. At the same time, Artificial Intelligence (AI) systems have grown exponentially, with the latest able to analyze vast amounts of data and easily generate realistic fake images. Such tension and technological advancement will influence each other, and the outcome of that intersection is not necessarily positive. For his PhD research, Sander Klomp focused on the potentially positive side of this intersection, where AI-driven image analysis can improve the safety and security of people in a world where both their privacy and physical safety are at risk.
Although the most popular AI tools are large language models such as ChatGPT or generative models that can create visual art, music, or deepfakes, AI also excels at automatically analyzing the content of images.
In his PhD research, Klomp improved AI models for image analysis, for example for a military-grade camera system that automatically scans the surroundings for hidden explosives, and for AI tools that help maintain the privacy of people in public spaces and hospitals.
Protecting people's privacy
It is well known that AI requires large amounts of data to learn. For example, an AI model that helps with traffic optimization and analysis requires large datasets of surveillance camera footage of public spaces.
Clearly, storing these types of datasets poses serious privacy risks. So, what if deepfakes could be used to retain the data but remove the privacy-sensitive part?
AI models are quite adept at creating realistic faces of people who do not exist, which can in turn be used to replace the faces of people in surveillance footage. The challenge is to anonymize the faces in such a way that other AI models can still use the data for training just as effectively as they could use the unaltered, privacy-sensitive data.
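To make this concrete, below is a minimal sketch of such an anonymization pipeline. The face detector is OpenCV's stock Haar cascade, and `generate_synthetic_face` is a noise-filled placeholder where a pretrained face generator (e.g. a GAN) would go; neither is the actual method developed in the thesis.

```python
import cv2
import numpy as np

# Stock OpenCV face detector; any face localizer fits the same pipeline.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def generate_synthetic_face(height: int, width: int) -> np.ndarray:
    """Placeholder for a pretrained generative model that would return a
    realistic face of a person who does not exist. Returns noise here so
    the sketch runs end to end."""
    return np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)

def anonymize_frame(frame: np.ndarray) -> np.ndarray:
    """Replace every detected face with a synthetic one, leaving the rest
    of the frame (vehicles, scene layout, etc.) untouched for training."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = frame.copy()
    for (x, y, w, h) in faces:
        fake = generate_synthetic_face(h, w)
        out[y:y + h, x:x + w] = cv2.resize(fake, (w, h))
    return out
```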
If privacy alone is not reason enough, recent privacy legislation prohibits storing privacy-sensitive information, so effective anonymization will soon be a prerequisite for having sufficient data to train AI models at all.
Klomp has shown that when the anonymization is performed in the right way, the effectiveness of AI models trained on the anonymized data is only a few percent lower than when trained on entirely real data, which is an acceptable cost.
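That cost can be quantified with a straightforward protocol: train identical models on the real and the anonymized dataset, evaluate both on the same held-out test set, and compare scores. A hedged sketch, where `train_model`, `evaluate`, and the datasets are placeholders rather than the thesis's experimental code:

```python
def utility_gap(real_train, anon_train, test_set, train_model, evaluate) -> float:
    """Train one model per dataset, score both on the same untouched test
    set, and return the performance drop caused by anonymization."""
    model_real = train_model(real_train)
    model_anon = train_model(anon_train)
    return evaluate(model_real, test_set) - evaluate(model_anon, test_set)
```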

Protecting Dutch troops from hidden explosives
Hidden explosives remain one of the deadliest threats in conflict zones. Although the type of explosives can range from roadside improvised explosive devices in Afghanistan to landmines and unexploded cluster munitions in Ukraine, the urgent need to detect these threats has remained the same.
In his main research project, Sander Klomp developed AI models to detect these threats from a vehicle-mounted camera system with 10 cameras of different spectral bands.
The main contribution is a change-detection AI model that compares current images with those from a previous patrol, allowing it to find suspicious changes in the environment, such as new objects or digging marks.
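The learned model itself is beyond a short example, but the core idea can be illustrated with a classical baseline: register the live frame to the reference frame from the previous patrol, then difference the aligned images. A minimal OpenCV sketch (parameters and the threshold are illustrative; a trained network would replace the naive pixel differencing to cope with lighting and seasonal variation):

```python
import cv2
import numpy as np

def detect_changes(reference_bgr: np.ndarray, live_bgr: np.ndarray) -> np.ndarray:
    """Align the live frame to a reference frame from a previous patrol,
    then highlight pixels that changed (new objects, digging marks, ...)."""
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    live = cv2.cvtColor(live_bgr, cv2.COLOR_BGR2GRAY)

    # Match ORB keypoints and estimate a homography aligning the viewpoints.
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    kp_live, des_live = orb.detectAndCompute(live, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_live, des_ref), key=lambda m: m.distance)[:500]
    src = np.float32([kp_live[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the live frame into the reference viewpoint and difference.
    aligned = cv2.warpPerspective(live, H, ref.shape[1::-1])
    diff = cv2.absdiff(ref, aligned)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    return mask  # white pixels mark candidate changes for closer inspection
```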
Future
Klomp is not quite ready to let go of his PhD research just yet: he currently leads a team of engineers at ViNotion, where he continues developing the camera system for the detection of hidden explosives. He hopes it can be used by Dutch troops in the field within the next few years.
Title of PhD thesis: . Supervisor: Peter H.N. de With (TU/e). Co-supervisor: Dennis W.J.M. van de Wouw. Other main parties involved: ViNotion.