Can enterprises set the standard for responsible AI? 

AI is opening doors we didn’t know existed — offering enterprises new ways to innovate and scale. From streamlining workflows to delivering personalized customer experiences, it’s revolutionizing how work gets done. But AI's transformative power comes with a responsibility businesses cannot ignore. They must balance innovation with ethical considerations like privacy, bias and sustainability.  

Legislation is also beginning to shape the AI landscape, with the Blueprint for an AI Bill of Rights providing a framework for ethical AI use in the United States and the EU AI Act codifying responsibilities around development and usage into law.

As AI reshapes entire industries, enterprises find themselves at a pivotal moment. They are not just watching this transformation. They are the key players.

So, the question isn’t just how AI can drive business but how enterprises can lead the charge on responsible AI. How do leaders shape AI systems that not only serve business goals but also prioritize fairness and transparency?

Building trust through explainable AI

At the heart of trustworthy AI is explainable AI (XAI): systems that provide clear, understandable insights into how decisions are made.

For example, if an AI system reviews health data and suggests diagnoses, doctors must understand how it arrives at those conclusions and be able to explain the results to patients. Without transparency, there is no trust in the system and little visibility into potential errors or biases.

By providing insights into how AI makes decisions, organizations can effectively manage biases, uphold accountability and create trust — essential steps in setting a new standard for fair AI.
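To make this concrete, here is a minimal sketch of one common explainability technique, permutation feature importance, built with scikit-learn. The data is synthetic and the clinical feature names are hypothetical placeholders, not the inputs of any real system described above.

```python
# Minimal XAI sketch: which inputs most influence a model's decisions?
# Synthetic data; the clinical feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["age", "blood_pressure", "cholesterol", "glucose"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much accuracy drops: the features
# whose shuffling hurts most are the ones driving predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: importance {score:.3f}")
```

A clinician-facing system would layer richer explanations on top, but even this level of insight makes errors and biases easier to surface.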

Evolving privacy and data challenges 

AI is fueled by data, but as data needs grow, so does the risk of exploiting or mishandling sensitive information.

Global data privacy regulations establish clear standards for handling personal data, but these laws differ from place to place. Europe’s General Data Protection Regulation (GDPR) offers some of the most comprehensive data protection in the world, applying to any business that processes the data of people in the EU regardless of where that business is located. China’s Personal Information Protection Law (PIPL) is equally stringent, with severe penalties for breaches, but reflects the country’s distinct regulatory landscape.

Comprehensive governance structures are key for global enterprises to effectively navigate the collection, storage and use of data across regions. These structures must ensure compliance while also fostering trust with users, customers and stakeholders. And they must evolve with the type of data being collected: in September 2024, California passed a law protecting neural data (information derived directly from brain activity, such as brainwave patterns). As AI systems begin to analyze things like facial expressions, biometric data and emotional states, organizations must carefully consider how to store and harness all types of data fairly.

Tackling bias

AI models, if built on biased data, can perpetuate those biases or fail to represent the experiences of all users. These inbuilt biases can stem from poor representation across the teams developing AI. Talking to ITN Business, Valtech explained that representation gaps often fall along lines of race and socioeconomics: “If you have a base model that is not representative of the customers it's serving, biases can creep in, consciously and unconsciously.”

To address bias, it's crucial that AI systems are trained on diverse datasets representing various populations and contexts, with support from diverse teams. Organizations should also establish ethical oversight committees and clearly define roles throughout the AI lifecycle to ensure that inclusivity is built into systems and processes. Regular audits are key to identifying and mitigating biases as they arise.
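As an illustration, one simple audit check is to compare a model’s positive-outcome rate across demographic groups, often called a demographic parity check. The sketch below uses randomly generated placeholder predictions and group labels; a real audit would run against production decisions.

```python
# Hypothetical audit check: does the model grant positive outcomes at
# similar rates across groups? All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
predictions = rng.integers(0, 2, size=1000)             # model's yes/no outputs
groups = rng.choice(["group_a", "group_b"], size=1000)  # sensitive attribute

rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
gap = max(rates.values()) - min(rates.values())

for group, rate in sorted(rates.items()):
    print(f"{group}: positive-outcome rate {rate:.1%}")
print(f"demographic parity gap: {gap:.1%}")  # large gaps warrant investigation
```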

Fairness is not just a technical challenge. It is a critical business responsibility. 

Spotlighting sustainability

AI models consume significant amounts of energy, with large language models relying on power-hungry data centers.

The International Energy Agency (IEA) reports that data centers account for 1% to 1.5% of global electricity use. In response to growing sustainability concerns, the European Green Deal highlighted the importance of reducing these environmental costs, encouraging tech companies to minimize energy consumption. For both enterprises and AI innovators, responsible AI leadership requires a careful balance between innovation and sustainability.

Leaders are already taking action to cut AI energy consumption. Google aims to run on "24/7 carbon-free energy" by 2030, and Microsoft has pledged to be carbon-negative in the same timeframe.

Companies are actively cutting AI’s energy demands by optimizing models, upgrading to efficient hardware and shifting to edge AI. OpenAI is finding ways to train models with fewer parameters to use less power, while NVIDIA, a leader in AI hardware, is creating energy-efficient processors to lower AI’s carbon footprint. Siemens is leveraging edge AI to process data locally, reducing dependence on energy-intensive cloud servers.
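One widely used model optimization is quantization, which stores weights at lower precision to shrink models and cut the cost of inference. The sketch below shows post-training dynamic quantization in PyTorch on a hypothetical toy network; it is not drawn from any of the companies named above.

```python
# Minimal sketch: dynamic quantization converts Linear-layer weights to
# 8-bit integers, reducing memory traffic and inference energy.
# The toy network is a hypothetical stand-in.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(512, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface, smaller and cheaper to run
```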

These strategies not only cut energy consumption but also enhance AI’s scalability and performance.

It’s important to remember that AI itself offers powerful opportunities to boost efficiency and reduce energy consumption across industries. As Paul Varlet, Strategy Partner at Valtech, explains, “[AI] is bringing amazing capabilities to climate change mitigation, sustainable agriculture, pollution control and biodiversity conservation as well.”

Nick Townend, Director of Product – eCommerce at CPC Farnell, part of the Avnet Group, is equally optimistic about the future of AI and sustainability: "AI has huge potential to help a global distributor like us. We can start to see trends, manage inventory more efficiently and reduce our footprint by placing inventory in the right locations at the right time."

A call to action for enterprises

As AI continues to shape industries and redefine organizations, enterprises must put ethics at the heart of strategy. Responsible AI is not just about compliance with regulations. It’s about actively shaping systems that are fair, transparent and sustainable.

But there is still a long way to go. A 2023 survey by conversational AI provider Conversica found that while 73% of US senior leaders considered ethical AI guidelines important, only 6% had developed them.

Explainable AI is essential in fostering trust, but it is only one piece of the puzzle. Organizations that want to lead the way in this new world must be brave enough to proactively shape the ways we build protections around AI — ensuring everyone has a seat at the table.

As Stephanie Shine, Strategy Principal at Valtech, shared during a 2024 Ethics in AI panel: “It can't just be technologists or leadership in a silo. Bring together technologists, legal and compliance experts, data governance experts and employees from the frontline who understand the business.”

By owning these ethical responsibilities, enterprises can lead the way in shaping trustworthy AI that drives innovation while serving the greater good — making sure its transformative power benefits all.