CAI-BLUE investigates how collaborative AI agents – adaptive, interactive systems that work alongside human teams – can support communication, learning, and shared understanding in blue-collar workplaces.
This project examines how vulnerability is constructed, negotiated, and practiced in the use and implementation of AI-automated home care in Estonia, Finland, Norway, and Sweden.
The RISK-AI project is guided by a main research question: how can organizations better identify, understand, and manage AI risks in healthcare in ways that enhance trustworthiness and support responsible use of AI?
This project introduces a normative framework for the design and use of AI systems, based on the concept of digital dignity – emphasizing fair representation, recognition, and participatory agency – and on responsiveness to citizen needs.
The project will adapt current and future LLMs towards more responsible coverage and functionality that encompass the linguistic and cultural diversity of the Nordic and Baltic regions.