Empowering ethical AI: trust, transparency and sustainability in action

Orange Business is on the frontline of ethical AI, aligning its innovations with the EU’s AI Act and our own Data & AI Ethics Charter. Through robust governance, lifecycle management and tools like Live Intelligence, we ensure AI systems are secure, transparent and environmentally conscious, enabling our customers to achieve operational excellence responsibly.

Three key takeaways:

  • Orange Business is building trustworthy AI aligned with EU regulations and our Data & AI Ethics Charter to ensure the highest standards
  • We are pioneering transparent, secure and environmentally friendly AI to support a sustainable future
  • We are implementing ethical AI with robust governance and lifecycle management


The EU AI Act is landmark legislation that addresses the risks and vulnerabilities that traditional data protection frameworks may not cover. We are taking advantage of the opportunities it presents by focusing on compliance strategies and adapting AI offerings to align with regulatory demands.

In parallel, Orange has created its own Data & AI Ethics Charter and By Design Governance, which together provide an overarching framework for how we approach data and AI. By embedding these principles into our operations, we ensure compliance with internal ethics standards while aligning with the EU AI Act and evolving legislation across the region.

By being proactive, we can maintain legal alignment and strengthen customer confidence in our AI-driven innovations, instilling both transparency and trust.

Building trustworthy AI with strong governance and human oversight

The EU notes the significant benefits AI can bring to society, from better healthcare and smarter living to smart manufacturing and improved sustainability. However, it wants to ensure that the AI used is “safe, transparent, traceable, non-discriminatory and environmentally friendly.” The legislation therefore establishes obligations for providers and users according to the level of risk an AI system poses. While many AI systems present only minimal risk, they must still be assessed.

The legislation is not without challenges. There is a strong emphasis on accountability, for example: our business lines must ensure that human oversight mechanisms are in place for high-risk systems. We have tackled this by embedding clear structures, processes and roles for human oversight into our AI development and operational workflows.

The development of the AI Act has significantly influenced our approach to AI practices. Our tools are developed with a “Responsible AI By Design” philosophy, ensuring that human oversight is integral at every stage of the AI lifecycle. Mechanisms for human intervention, for example, have been established in case of anomalies or ethical dilemmas.

Significant organizational changes are also required to create an AI governance plan tailored to regulatory demands. This includes creating multidisciplinary teams involving legal, technical and compliance experts and upskilling and reskilling employees to enhance productivity and broaden use cases.

This approach keeps innovation and ethics running in parallel. Before launching Live Intelligence, we drew on the skills of multidisciplinary teams and tested the solution with 50,000 of our employees, catching issues before release. The multi-LLM solution has been designed to democratize AI while aligning with our ethical principles.

Proactively embracing AI Act guidelines

Through our secure AI platform, structured governance and commitment to European regulatory standards, we continuously work to leverage AI responsibly.

We have created a secure and compliant internal platform to manage AI system models, providing centralized governance, integrated safeguards for GDPR compliance and end-to-end assistance to customers looking to adopt generative AI technologies.

In addition, we have established a Responsible AI Design Authority to qualify the risk level associated with each AI project. This expert multidisciplinary authority classifies AI systems based on their potential impact on individuals, society and critical sectors to ensure the ethical and safe use of the technology. We also have a centralized Responsible AI Committee, which provides a decision-making structure on complex questions around AI deployment, accountability and risk management.
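The EU AI Act defines four broad risk tiers (unacceptable, high, limited and minimal), each carrying different obligations. As a purely illustrative sketch of this kind of triage, not Orange Business's actual classification logic, a simplified qualifier might look like:

```python
# Illustrative only: a simplified triage of AI use cases into the EU AI Act's
# four risk tiers. The use-case labels below are hypothetical examples; real
# classification requires legal and contextual analysis by experts.

PROHIBITED = {"social_scoring", "subliminal_manipulation"}       # unacceptable risk: banned
HIGH_RISK = {"recruitment", "credit_scoring", "medical_triage"}  # Annex III-style uses
LIMITED_RISK = {"chatbot", "deepfake_generation"}                # transparency obligations

def classify_risk(use_case: str) -> str:
    """Return the (simplified) AI Act risk tier for a use-case label."""
    if use_case in PROHIBITED:
        return "unacceptable"  # prohibited outright
    if use_case in HIGH_RISK:
        return "high"          # conformity assessment, human oversight, logging
    if use_case in LIMITED_RISK:
        return "limited"       # disclosure and transparency duties
    return "minimal"           # voluntary codes of conduct

print(classify_risk("recruitment"))  # high
print(classify_risk("chatbot"))      # limited
```

In practice, a design authority weighs context (who is affected, in which sector, with what recourse), not just the use-case label, which is why a human expert body rather than a lookup table makes the final call.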

We are also part of the EU AI Pact, an open declaration committing signatories to high standards for ethical AI use and fostering trust with regulators and customers. The Pact supports the industry's voluntary commitment to start applying the principles of the AI Act ahead of its entry into application, and it will be invaluable in shaping AI governance through dialogue with policymakers, industry leaders and other stakeholders, helping organizations comply with the EU AI Act before its full enforcement. Although the AI Act entered into force on 1 August 2024 across all 27 EU member states, a number of its provisions will not apply until 2 August 2026.

We have also implemented comprehensive AI Act risk management planning, including risk assessment processes, mitigation strategies and regular auditing of AI systems for compliance monitoring.

Our processes also include bias reduction. Our AI research teams continuously work to reduce bias in our algorithms, and we monitor KPIs to demonstrate our transparent and fair approach to AI. As a result, we were the first company to be awarded the GEEIS-AI (AI Gender and Diversity) certification, designed to raise awareness across the entire AI development chain, from design to operation.

Addressing AI’s environmental impact

AI contributes to carbon emissions primarily through the energy consumption required to train, deploy and operate large-scale machine-learning models and systems. We address this issue head-on by calculating and communicating the carbon costs of GenAI solutions sold to our customers.

Life cycle assessment (LCA) is a highly effective tool for measuring AI’s environmental impact by systematically evaluating all stages of its lifecycle. It assesses resource consumption, energy use and emissions associated with AI development, deployment and usage. We are applying LCA to evaluate AI on the cloud.

LCA pinpoints the stage or components that most significantly increase carbon footprints, such as increased energy use of data centers during training and emissions from hardware production. This enables organizations to target trouble spots by optimizing model efficiencies, for example, and implementing sustainable practices.
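At its simplest, the operational part of such an assessment multiplies the energy drawn by a workload by the data center's efficiency overhead and the carbon intensity of the local grid, then adds an amortized share of hardware manufacturing emissions. The sketch below is a generic illustration with placeholder figures, not Orange Business's LCA methodology or measurements:

```python
# Illustrative sketch of a first-order carbon estimate for an AI workload.
# All inputs below are hypothetical placeholders, not measured values.

def genai_carbon_estimate(
    energy_kwh: float,          # measured or modeled energy for the workload
    pue: float,                 # data-center Power Usage Effectiveness (>= 1.0)
    grid_gco2_per_kwh: float,   # carbon intensity of the local electricity grid
    embodied_gco2: float = 0.0, # amortized hardware-manufacturing emissions
) -> float:
    """Rough operational + embodied carbon estimate in grams of CO2e."""
    operational = energy_kwh * pue * grid_gco2_per_kwh
    return operational + embodied_gco2

# Placeholder example: 1,200 kWh of GPU time, a PUE of 1.2, a 50 gCO2e/kWh
# grid, plus 10 kgCO2e of amortized hardware emissions.
total = genai_carbon_estimate(1200, 1.2, 50, embodied_gco2=10_000)
print(f"{total / 1000:.1f} kgCO2e")  # 82.0 kgCO2e
```

A full LCA goes much further, covering hardware production, transport and end-of-life, but even this first-order breakdown shows why grid carbon intensity and data-center efficiency are the levers that matter most during training and inference.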

We are actively working to mitigate AI’s impact on our planet through energy-efficient technological innovation, sustainable AI design, renewable energy integration and transparent practices.

Navigating the AI Act in terms of cost compliance and risk mitigation

The EU AI Act comes with teeth, imposing stringent requirements and significant fines for non-compliance, and organizations must adopt proactive strategies to mitigate risks. Companies deploying AI in Europe should prioritize platforms purpose-built for managing AI systems, ensuring governance and transparency from development to deployment.

Partnering with strategic experts like Orange Business can help organizations stay ahead of regulatory changes. Additionally, establishing robust accountability structures and review protocols is critical for monitoring AI quality, especially in high-risk applications such as healthcare or legal decision-making, where compliance is a regulatory and trust imperative.

To optimize enterprises’ return on investment, Orange Business can help implement AI systems and maintain their compliance. For example, we offer our customers the first IaaS-trusted GenAI platform, which provides GenAI capabilities on a scalable, usage-based model.

The EU AI Act is just the beginning

More stringent regulations and oversight mechanisms are expected to follow, but they should be seen as a real business opportunity, not a hurdle. Organizations that prepare early can reduce risks, optimize costs and seize a strategic advantage in the rapidly evolving world of AI.

Please contact our account team if you would like to find out more.

Muriel Moënza

As AI Ethics Officer and Head of Data Governance at the Marketing & Data Office, Muriel Moënza leads the Orange Business Data & AI Democratization strategy to enhance operational efficiency and business ownership. Her professional background spans marketing, presales, sales management and IT consulting. More recently, Moënza upskilled her competencies in Ethical AI and set up a Data and Responsible AI by Design program for use cases and solutions development support.