Implement AI responsibly with an ethics framework

This is the fourth article in a five-part series that examines the successful adoption of enterprise artificial intelligence (AI). This blog discusses responsible AI and how organizations can define guiding principles, employ tools to assess their effectiveness, and establish a framework for building ethical AI solutions. The other articles in the series address AI strategy, building a foundation for AI success, AI and data governance, and MNP Digital’s delivery framework. If you missed the earlier blogs, start from the beginning with An Introduction to Building an Artificial Intelligence Strategy.

“Virtually every big company now has multiple AI systems and counts the deployment of AI as integral to their strategy,” states Joseph Fuller, professor of management practice at Harvard Business School, who co-leads Managing the Future of Work, a research project that studies the development and implementation of AI.

As AI technology advances and gains increasing support from a network of Internet of Things devices, personal devices, and people, its effect on both the customer-facing and operational sides of a business will only grow. AI and its subsets, such as machine learning (ML), deep learning (DL), and natural language processing (NLP), have raised fundamental questions about what constitutes responsible use of these systems, what the systems should be allowed to do, what risks are involved in using them, and how we can control both the technology and the risks.

Recently, a series of deepfake videos featuring Tom Cruise captured the attention of millions of TikTok users. This is a classic example of how the power of AI can produce negative outcomes. There are numerous ways AI can harm society when intentions are misguided and when ethics and regulation are not built into every stage of application development.

The ethics of AI

As the Stanford Encyclopedia of Philosophy points out, the ethics of AI is predominantly focused on concerns over the use, control, and impact of new technologies – which, historically, is a typical response to new technology.

“Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape).” [1]

While some concerns about AI are needless – like the idea that it will take over the world, steal our jobs, and dominate human beings – there are considerations around AI ethics and responsible use that businesses must address.

As Harvard Business Review reported, “Companies are leveraging data and artificial intelligence to create scalable solutions – but they’re also scaling their reputational, regulatory, and legal risks. For instance, Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app… Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.” [2] 

Establishing an AI Ethics Framework

This doesn’t mean that AI shouldn’t be developed or implemented. But it does signify that “building ethical AI is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.” [3]

That’s why it’s critical for organizations to have a framework for building ethical AI solutions that includes clearly defined principles and tools to assess their effectiveness. A framework helps to ensure that AI systems perform in a safe, secure, and reliable way and that safeguards are in place to prevent unintended adverse impacts. It also provides the guardrails that keep AI innovation governed and supports continuous monitoring of compliance throughout the development life cycle.

In our Ethics Framework, a company must start an AI project by adhering to the guiding principles – they act as the anchor for all AI projects. The company does this by ensuring each principle is realized by the AI solution, and that the realization of each principle does not contradict any other.

Diagram: AI Ethics Framework

They then must verify that the principles are being adhered to throughout the lifecycle of the AI project, using open-source and peer-evaluated tools. At the end of the project, they must again verify that the guiding principles are fully realized by the application and that none are violated. The framework cycle is repeated for the next project to ensure that the organization always acts ethically with regard to AI. Every check and balance in each AI project must strictly adhere to five guiding principles.

The five most important guiding principles for AI

There are many examples of organizations developing their own set of principles to guide their work in AI. Notable examples include Google’s AI Principles [4] and Microsoft’s Responsible AI. [5] At MNP Digital, there are five principles we believe to be the most important.

1. Privacy and data protection

One of the most critical components – if not the most critical – of any AI/ML project is the use of data. Therefore, discussions around privacy are essential to developing any AI solution. Any collection and use of data must be done with the utmost discretion, with the consent of those whose data is being collected and used, and within the legal framework of the country (or countries) where the project is being conducted.

AI systems must guarantee privacy and data protection throughout the system lifecycle to allow individuals to trust that data collected about them will not be used to discriminate against them unlawfully or unfairly.

Naturally, the type of data used depends on the type of project being conducted; for example, any project involving facial recognition will require many images of people’s faces. The quality and integrity of the data used to train the AI system is paramount to its performance, so it is critical to ensure that training data sets are free from bias and error.
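
As a minimal sketch of what this can look like in practice – the column names and salt below are purely illustrative assumptions, not a prescribed standard – direct identifiers can be pseudonymized or dropped, and basic integrity checks run, before the data ever reaches a training pipeline:

```python
import hashlib
import pandas as pd

# Hypothetical customer records; column names are illustrative only.
records = pd.DataFrame({
    "name": ["A. Singh", "B. Tremblay", "C. Osei"],
    "email": ["a@example.com", "b@example.com", "c@example.com"],
    "age": [34, None, 51],
    "purchased": [1, 0, 1],
})

def pseudonymize(value: str, salt: str = "project-specific-salt") -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

# Drop or pseudonymize direct identifiers before the data reaches training.
records["subject_id"] = records["email"].map(pseudonymize)
records = records.drop(columns=["name", "email"])

# Basic integrity check: surface missing values so they are handled deliberately.
missing = records.isna().sum()
print(missing[missing > 0])
```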

2. Transparency

People developing AI solutions must be open about how and why they are using AI, clear about its limitations, and willing to explain the behavior of these solutions. To support this principle, the data sets and processes that contribute to the AI system’s decisions – including how data was collected and which algorithms were used – should be documented to allow for traceability and increase transparency.
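
One lightweight way to support this kind of traceability – sketched here with hypothetical field names and values, not a mandated template – is to record a small provenance document alongside every trained model, capturing where the data came from, how it was collected, and which algorithm was used:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class ModelProvenance:
    """Minimal record of how an AI system's decisions are produced."""
    model_name: str
    algorithm: str
    training_data_sources: list[str]
    collection_method: str
    known_limitations: list[str] = field(default_factory=list)
    documented_on: str = date.today().isoformat()

record = ModelProvenance(
    model_name="credit-screening-v2",          # illustrative name only
    algorithm="gradient-boosted trees",
    training_data_sources=["loan_applications_2019_2023.csv"],
    collection_method="customer applications with documented consent",
    known_limitations=["sparse data for applicants under 21"],
)

# Store the record next to the model artifact so its decisions remain traceable.
with open("model_provenance.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```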

3. Bias and discrimination

It is imperative to understand that the data sets used by AI systems may contain historical bias, and that those biases can lead to unintended discrimination against certain groups of people. Bias clearly hurts those who are discriminated against, but it also hurts everyone by reducing our collective ability to participate in the economy and society. Organizations should strive to remove identifiable and discriminatory bias at the data collection phase wherever possible.
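
As a simple, hedged illustration of what checking for this can look like – the group labels are hypothetical and the 0.8 threshold follows the commonly cited “four-fifths rule,” not an MNP Digital standard – a disparate impact ratio can be computed over a model’s outcomes per group:

```python
from collections import defaultdict

# Hypothetical model decisions (1 = approved) with a protected attribute.
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

totals, approvals = defaultdict(int), defaultdict(int)
for row in decisions:
    totals[row["group"]] += 1
    approvals[row["group"]] += row["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())  # disparate impact ratio

print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule", often used as a screening threshold
    print("Warning: outcomes differ substantially across groups; investigate.")
```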

4. Beneficence and non-maleficence

Any AI solution must avoid all harm, and the benefits of any solution must be weighed against the risks and costs. For example, we could avoid harm by using AI to make workers more efficient and more productive, rather than simply automating them out of their jobs. This would ensure human control of technology.

5. Autonomy

With AI, autonomy becomes rather complex: when we adopt AI and its smart agents, we willingly cede some of our decision-making power to machines. Thus, affirming the principle of autonomy in the context of AI means striking a balance between the decision-making power we retain for ourselves and that which we delegate to artificial agents.

What seems most important here is what we might call “meta-autonomy,” or a “decide-to-delegate” model: humans should always retain the power to decide which decisions to take, exercising the freedom to choose where necessary and ceding it in cases where overriding reasons, such as efficacy, may outweigh the loss of control.

The AI assessment cycle continues

Once a company has established its guiding principles, it must put tools in place to check whether its AI applications are abiding by them. Google, Microsoft, IBM, and other firms have developed such tools with the drawbacks of unethical AI in mind.

Understandably, many are wary of the paradox of using an algorithm to evaluate an algorithm. To counter this, most of the tools that companies use are open-source and have been peer-evaluated to ensure honesty and reliability.
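
For example – assuming the open-source Fairlearn library is available; this is an illustration of the category of tool, not an endorsement of any single product – a fairness metric can be computed directly against a model’s predictions and a sensitive attribute:

```python
# Requires the open-source Fairlearn package: pip install fairlearn
from fairlearn.metrics import demographic_parity_difference

# Hypothetical labels, model predictions, and a sensitive attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
sensitive = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Difference in selection rates between groups; 0.0 means parity.
gap = demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive)
print(f"demographic parity difference: {gap:.2f}")
```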

To learn more about specific AI assessment tools, dig deeper into the complexities of AI ethics, and learn how to build and adhere to an ethics framework, we invite you to join our five-day immersive AI workshop. The workshop will cover how to assess and build an AI strategy customized for your organization as well as how to get full value from your implementation, avoid unnecessary risk, and ensure responsible AI use. 

Learn more about MNP Digital’s AI Workshop Series >

Connect with an MNP Digital advisor to discuss your artificial intelligence strategy.

Connect with us to get started

Our team of dedicated professionals can help you understand what options are best for you and how adopting these kinds of technology could help transform the way your processes function. For more information, and for extra support along the way, contact our team.

Author: Adriana Gliga-Belavic

Adriana Gliga-Belavic, CISSP, CIPM, PCIP, is a Partner, Privacy and Data Protection Lead with MNP in Toronto. Passionate about security and privacy, Adriana helps public and private clients build pragmatic strategies and privacy programs to maintain customer trust and find the right balance between business results, proactive cyber resiliency and enhanced privacy.