Monitoring Hub for Mapping and Surveying Responsible Artificial Intelligence and Societal Engagement in the Nordic Region

AI is increasingly used in public administration across the Nordic-Baltic region to improve efficiency, speed, and decision-making. However, current AI governance largely focuses on individual rights such as transparency and privacy, often overlooking broader societal impacts and structural risks such as exclusion, bias, and diminished institutional legitimacy. This project responds to that gap by introducing a normative framework based on the concept of digital dignity, emphasizing fair representation, recognition, and participatory agency, together with responsiveness to citizen needs in the design and use of AI systems. Beyond automation, the project draws attention to how AI tools affect discretionary decision-making in the public sector by steering decisions in ways that are not easily visible or accountable. The project proposes the creation of a Nordic AI Monitoring Hub, a long-term regional infrastructure that will track, compare, and make visible the societal impacts of AI systems used in public administration by generating a set of public-facing, regularly updated outputs: an open-access database of mapped AI systems in public services; an interactive dashboard visualizing cross-country citizen and expert surveys on AI systems; and a repository of co-authored policy briefs and governance toolkits tailored to public authorities, civil society actors, and the wider public. Delivered through the Hub, these accessible outputs will support comparative learning, civic engagement, and early detection of systemic risks.

Through a mixed-methods research strategy, the project will map AI applications across policy areas and classify them using its normative framework. Special emphasis will be placed on empirical investigations of human-AI interaction in knowledge-intensive and discretionary administrative contexts (e.g., interpreting eligibility rules or drafting communications). Key innovations include a governance framework that complements existing legal standards by addressing the collective, institutional, and democratic dimensions of AI, and a participatory monitoring infrastructure designed for long-term regional use. The project will also investigate how public confidence in political institutions and civil services evolves as AI reshapes government operations, thereby addressing both technological and institutional legitimacy. The project is implemented in collaboration with public authorities, civil society organizations, and researchers, and builds on regional initiatives.

Contacts

Bodil Aurstad

Special Adviser
Mathias Hamberg

Special Adviser
