AI Design

Quickly gather your own thoughts to ensure your AI system is accurate, fair, and transparent


Problem Statement

A staggering 60% of AI development costs stem from re-development and post-deployment fixes, highlighting the urgent need for tools at the design stage. Existing design-stage checklists often target specific roles; for instance, product managers focus on data representation and algorithmic fairness, whereas AI auditors prioritize safeguarding fundamental rights. However, this role-specific focus leads to fragmentation, stifling interdisciplinary creativity and collaborative problem-solving.


Responsible AI Guidelines

To address these gaps, Nokia Bell Labs has developed 22 Responsible AI Guidelines: actionable steps that are understandable across diverse team roles. The guidelines apply across the three phases of the AI development lifecycle (development, deployment, and use), are grounded in scientific literature and best practices, verified for compliance with eight ISO standards, and cross-referenced with the EU AI Act. Each guideline specifies the responsible role: designer (D), engineer/researcher (E), or manager/executive (M).


Methodology

The guidelines were developed through a structured, iterative co-design process with AI researchers, developers, and compliance experts. The process began by identifying key responsible AI principles through a detailed review of research papers from leading AI conferences. This groundwork led to an initial catalog of guidelines, which was then refined through interviews with AI practitioners and expert panels to ensure alignment with ISO and EU AI Act standards.


Rather than taking a one-size-fits-all approach, the guidelines emphasize flexibility and context. Each one addresses specific team roles, such as decision-makers, engineers, and designers, with tailored examples. A harm mitigation guideline, for example, suggests that decision-makers allocate resources, engineers set up safeguards, and designers gather user feedback.


To further support this flexibility, Nokia Bell Labs developed an interactive tool that guides users through the guidelines in a way that feels personal and relevant to their work. The tool prompts users with specific questions and allows them to document their decisions about each guideline.
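As an illustration, the decisions a team documents per guideline could be modeled as simple records and exported as JSON, in the spirit of the tool's session download. This is a minimal sketch; the class and field names below are hypothetical, not the tool's actual schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class GuidelineDecision:
    # Hypothetical record; field names are illustrative, not Nokia Bell Labs' schema.
    guideline_id: int  # one of the 22 guidelines
    phase: str         # "development", "deployment", or "use"
    role: str          # "D" (designer), "E" (engineer/researcher), "M" (manager/executive)
    decision: str      # free-text note documenting the team's choice

# A session is simply the list of decisions recorded while walking the guidelines.
session = [
    GuidelineDecision(1, "development", "E",
                      "Added bias checks to the training pipeline."),
    GuidelineDecision(2, "deployment", "M",
                      "Allocated budget for post-deployment harm monitoring."),
]

# Export the documented decisions as JSON for sharing or archiving.
print(json.dumps([asdict(d) for d in session], indent=2))
```

Whatever the concrete format, the point is that each decision is tied to a specific guideline, lifecycle phase, and responsible role, which is what makes the record auditable later.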


Nokia Bell Labs' guidelines offer a shared foundation, enabling teams to embrace ethical AI as a core part of development, not an afterthought. This proactive approach not only strengthens accountability but also prevents costly post-hoc fixes, setting a foundation for AI that's responsible by design.


Research

RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles, CSCW 2024

