(Re)Capturing AI: Governing generative search engines in the Nordic countries

This project addresses the question: How do generative search engines (GSEs) influence media pluralism, democratic discourse, and digital sovereignty in the Nordic region—particularly through their impact on news distribution, information diversity, and public trust—and how can governance frameworks ensure that GSEs’ development and use promote accountability, transparency, and alignment with Nordic public values? Addressing this is urgent, as generative AI is rapidly reshaping how citizens engage with news and information. A particularly transformative development is the rise of GSEs such as ChatGPT Search and Microsoft Copilot—AI-powered systems that, unlike traditional search engines, which display ranked links, generate direct, synthesized responses to queries based on retrieved content. While promising greater efficiency and personalization, they also introduce serious risks concerning accountability, transparency, media diversity, and misinformation.

The project explores the broader implications of GSEs for media pluralism, democratic discourse, and digital sovereignty in the Nordic region. It examines their effects on news distribution, the visibility of Nordic news sources, and public trust in AI-generated content, especially in communication ecosystems built on public service values and smaller language areas. It also evaluates the governance frameworks currently in place and recommends policy options aligned with Nordic democratic and welfare state principles. 

An interdisciplinary team of researchers from Denmark, Finland, Norway, and Sweden combines expertise in media and communication, political science, and information systems. The project applies a unique mix of audits of US- and Europe-based GSEs, interviews, surveys, and policy analysis. It will evaluate sourcing behavior, information accuracy, and norm alignment across GSEs and languages (WP1); explore the economic and editorial implications for Nordic news publishers, and their strategic responses (WP2); study user trust, information-seeking, and political learning in AI-powered search environments (WP3); and analyze how current and emerging regulatory frameworks promote—or hinder—responsible, transparent, and accountable AI governance (WP4).

Through comparative analyses, stakeholder engagement, and policy recommendations, the project will help strengthen media diversity, public trust, and democratic resilience in the face of rapidly evolving generative AI technologies. 

Contacts


Bodil Aurstad

Special Adviser

Mathias Hamberg

Special Adviser
