
Scaling trustworthy AI: How to turn ethical principles into global practice

meerimseidakmatova
Jan 20
2 min read

  • Trustworthy artificial intelligence (AI) is a global priority, as societies shift from broad ethical principles to the more challenging work of putting them into practice.

  • Universities are emerging as key actors in responsible AI, embedding fairness, privacy and accountability through ethics-by-design methods and interdisciplinary governance.

  • Collaboration and education in academia are shaping sustainable AI for the future, advancing innovation while safeguarding the social values that underpin stable economies and societies.

Trust serves as a foundation for stable economies; it sustains democratic institutions and underpins international cooperation. Generative artificial intelligence (AI) accelerates scientific discovery, economic growth and societal change – transforming how we innovate, make decisions and solve global challenges.

Now more than ever, I see AI as a strategic priority, one that requires international cooperation, institutional capacity and culturally adaptable approaches. While we have defined trustworthy AI, articulating principles is only the first step in the far more challenging task of putting those principles into practice at scale.

Implementing ethical principles

Trustworthy AI has evolved from an abstract aspiration to an operational necessity. As generative AI systems shape health, finance and public services, societies recognize that ethical principles such as fairness, transparency and accountability must become practical frameworks guiding real-world technologies.

This shift from articulating values to implementing them marks a pivotal moment. It demands standards, methodologies and governance mechanisms that can scale worldwide, yet remain flexible enough for diverse cultural and economic contexts. I see this as a global priority in which institutional and technical capacity translates ethical intent into practice.


In 2024, the United Nations General Assembly adopted a landmark resolution promoting "safe, secure and trustworthy" AI. It called for international cooperation to ensure generative AI development respects human rights, reduces digital divides and advances sustainable development, underscoring the shared responsibility of governing AI technologies. The ETH Zurich community contributes to this goal by connecting scientific insights with policy action. Computer scientist Andreas Krause, for example, serves on the UN’s Global AI Advisory Body, offering expertise on how AI can be governed for the common good.

Global collaboration is central to implementing trustworthy AI. To advance this goal, Meta, IBM and more than 50 organizations – including ETH Zurich and CERN – founded the AI Alliance, which grew to more than 140 members in 23 countries in its first year. In 2025, it launched several initiatives, including the development of a roadmap for Responsible and Strategic Open-Source AI Innovation in Europe and Beyond in collaboration with ETH Zurich’s AI Ethics and Policy Network.
