Artificial Intelligence is evolving rapidly, and its implementation is unfolding in unpredictable ways. Seventeen new research projects have received funding to support the ethical use of AI.
RAID investigates how AI is transforming the way we design digital services—and how we can ensure that this transformation is responsible, ethical, and aligned with the needs of society.
The NordRAI project investigates how generative artificial intelligence (GenAI) is transforming collaborative learning and teaching in higher education across the Nordic region.
The ORBIT consortium investigates how organizations in the Nordic energy sector develop and use artificial intelligence (AI) in responsible and regulation-aligned ways.
The project aims to develop guidelines for using AI to predict protein-ligand binding energies, ensuring that AI methods are applied responsibly in drug discovery and toxicology.
With a special focus on digital literacy and gender equality, GAIYA investigates how students and teachers in Denmark, Sweden, and Norway use and experience generative AI tools.
DARES investigates the relationship between energy systems, data, and AI, focusing on the challenges of data quality for responsible AI in the green transition.
CAI-BLUE investigates how collaborative AI agents – adaptive, interactive systems that work alongside human teams – can support communication, learning, and shared understanding in blue-collar workplaces.
This project examines how vulnerability is constructed, negotiated, and practiced in the use and implementation of AI-automated home care in Estonia, Finland, Norway, and Sweden.
The RISK-AI project is guided by a central research question: how can organizations in healthcare better identify, understand, and manage AI risks in ways that enhance trustworthiness and support the responsible use of AI?