TU/e researchers promote human-centered AI in new 'alignAI' doctoral network
TU/e researchers are working on the MSCA doctoral training network project 'alignAI' to align large language models with human values, focusing on fairness and explainability.

TU/e's departments of Industrial Design and Industrial Engineering & Innovation Sciences are participating in the 'alignAI' doctoral network. This four-year research project started on 1 September and is chaired by the Technical University of Munich (TUM). The doctoral network is funded with €3.55 million by the European Union under the Marie Skłodowska-Curie Actions (MSCA) and will train 17 PhD students at six EuroTech universities. TU/e expertise is contributed by Martijn Willemsen, Stephan Wensveen and Carlos Zednik, who also bring TU/e's EAISI network to the table.
Aligning Large Language Models with Human Values
The use of Large Language Models (LLMs) has increased significantly with applications like ChatGPT. While LLMs offer many benefits, their impact on society and individuals has not yet been fully understood or prioritized, and if not developed responsibly, LLMs could have negative consequences. The project aims to mitigate these risks by aligning LLMs with human values. By integrating expertise from the social sciences, humanities, and technical disciplines, the project will address critical issues such as explainability and fairness.
About the alignAI Project
The doctoral network aims to train 17 doctoral candidates in the interdisciplinary field of LLM research and development. Two key principles guide this approach: explainability and fairness. Explainability helps build trust by making AI systems more understandable and easier to oversee. Fairness ensures that AI applications are accessible to everyone and that their decisions are equitable. The project will address practical issues in education, mental health, and news consumption to demonstrate the real-world relevance of these principles.
TU/e Expertise
The TU/e researchers work in close collaboration on the project. Martijn Willemsen will develop a user-centric evaluation framework for AI, which will be used to evaluate the different AI tools developed in the project. Stephan Wensveen will work on integrating human values into the design of LLMs, while Carlos Zednik will focus on explainability and standardization, as well as the ethical implications and societal impact of these technologies.
The consortium consists of partners from the EuroTech Universities Alliance, of which TU/e is a strategic member. This collaboration highlights the importance of international and interdisciplinary cooperation in developing responsible AI technologies and addresses the need to educate the next generation of researchers on the responsible, human-centered development of AI.