CAI-BLUE investigates how collaborative AI agents – adaptive, interactive systems that work alongside human teams – can support communication, learning, and shared understanding in blue-collar workplaces.
This project examines how vulnerability is constructed, negotiated, and practiced in the use and implementation of AI-automated home care in Estonia, Finland, Norway, and Sweden.
The RISK-AI project is guided by a main research question: how can organizations better identify, understand, and manage AI risks in healthcare in ways that enhance trustworthiness and support responsible use of AI?
This project introduces a normative framework based on the concept of digital dignity – emphasizing fair representation, recognition, and participatory agency – and on responsiveness to citizen needs in the design and use of AI systems.
The project will adapt current and future LLMs towards more responsible coverage and functionality that encompass the linguistic and cultural diversity of the Nordic and Baltic regions.
The AI-PROCARE project seeks to redefine public procurement activities for AI systems to foster sustainable, equitable, and engaging healthcare work environments.
This project addresses the question: How do generative search engines (GSEs) influence media pluralism, democratic discourse, and digital sovereignty in the Nordic region?
This project will create the first "fairness map" for medical AI: a clear guide to which social groups and diseases face the greatest risk of unfair treatment, and which fixes work best.