The goal of NordAId is to develop an interdisciplinary meta-framework for trustworthy AI in public decision-making in the Nordic-Baltic context. A collaboration between the universities of Oslo, Copenhagen, Uppsala and Vilnius, NordAId seeks to identify the limits and benefits of AI-based decision-making. This will be achieved by testing AI tools with partners and civil society in the hard case of asylum, which covers multiple high-risk categories in the EU's AI Act. The project builds on a Nordic-Baltic dataset of almost 1 million asylum decisions, and the team brings world-leading expertise on adaptive AI governance, Explainable AI, uncertainty quantification, and participatory design.
Nordic and Baltic states present a puzzle in the responsible use of AI in public decision-making. The region is uniquely well-positioned, with high-quality datasets, strong AI communities, and relatively high public trust in human and even AI-based decision-making. However, a range of challenges has arisen. Data is difficult to access, and insufficient attention has been paid to trustworthiness and Nordic regulatory requirements. As a result, existing efforts tend to be under-specified and to focus on 'low-hanging fruit' that is difficult to scale.
The NordAId project will seek to address this by focusing on asylum decision-making. Empirical evidence of bias and noise in human decision-making means that there is a strong interest in AI in this domain. Conversely, asylum decision-making concerns highly vulnerable individuals, and AI deployment in asylum falls within half of the high-risk categories in the EU AI Act. As a hard case, asylum decision-making thus enables us to address wider concerns about AI in public administration and courts.
The project team is uniquely positioned to develop such an approach to trustworthy AI. In a previous NordForsk project, we built a unique data infrastructure of asylum decisions (s2-Data). It permits the application of data science techniques to large volumes of decisions, while restricting and minimising access to personal data. This infrastructure allows NordAId to develop, test, and evaluate a spectrum of generalisable AI models with user partners. This way, the project can develop a 'meta-framework' for assessing and extending the trustworthiness of AI-based decision-making, focusing on improving decision-making through value-creating solutions that go beyond merely improving efficiency.
Contacts