The Three Pillars of Responsible AI
The following is an edited extract from Responsible AI.
Responsible AI is not primarily a technical problem to solve. Yes, AI in general is riddled with ethical problems, and yes, these are the problems that will make headlines, but they are only the tip of a rather large iceberg. At the top sit the ethical problems you are witnessing in your AI, but as you dive beneath the surface you will quickly find that these problems are most often rooted in poor people management and operational processes, not in technical execution.
Essentially, that means that in addition to the technical layer of Responsible AI, you also have a people layer and a process layer. It does not matter how good the decisions around your technology are if you do not have people who are trained to implement those decisions, or a process for carrying through and scaling those decisions. If you make the mistake of jumping immediately to the technical solutions layer without first examining the people and process layers, at best you will see a few instances of positive impact on your AI, and at worst you will see your time and resources go to waste on an initiative that never produces its desired results. To reap the benefits of Responsible AI, you need to address all three layers in your ethics solutions.
The purpose of a robust Responsible AI strategy is to ensure that you are tackling not only the symptoms of an ethics problem but also its root causes. For your AI to reflect your organization’s foundational values, those values must be reflected throughout all three layers. That is why, when it comes to enabling Responsible AI, a holistic and sustainable strategy rests on three pillars. As you may have already guessed, the pillars align with the three layers of ethical solutions: People, Process and Technology.
People
Starting with the first of the three, we have the People Pillar of your Responsible AI strategy. Each pillar comes with an overarching question that guides the thinking and solutions for that pillar. For this first pillar, we are asking the question: “Who is building your AI?”
As the name implies, this pillar focuses directly on the people behind the technology. It may seem counterintuitive at first, considering your goal is to embed ethics into the technology, not the person. But you cannot have AI without people, and if your foundational values are not reflected within your teams, they certainly will not be reflected in your AI systems. This means you need to ensure the people who are building your AI systems, or who are using AI tooling in their daily workflow, are trained in how to implement the relevant foundational values, have incentives aligned to furthering the goals of your Responsible AI strategy and know how to identify ethical challenges in their work. Above all else, though, the People Pillar relies on building an open company culture that is not only receptive to Responsible AI but actively supportive of it.
Process
Moving a layer up, we have the Process Pillar. Here, we are seeking solutions to the question: “How is the AI being built?”
Now that you understand who is building your AI, you need to understand how that AI is being built. Again, it may seem strange at first to be looking at operational processes when you are concerned about the technical outcome. But good AI practices lead to good AI, so having processes that support the value alignment of your AI is essential. This is where AI governance takes centre stage, as you will need to ensure your ethically trained teams have the right workflows, policies and checkpoints in place to encourage and facilitate values-based decision making. Think of this pillar as the one that builds protocol and structure for ethics, ensuring that the critical decisions influencing the ultimate outcomes of your AI systems reflect your foundational values.
Technology
To complete the trifecta, we have the Technology Pillar. Having answered the who and the how in the first two pillars, it is in this third pillar that we ask the question: “What AI are you building?”
It is not until this final pillar that we turn our attention to the technology your organization is actually building, and the reason you started your Responsible AI strategy in the first place. It is incredibly important here to understand that it does not matter whether your organization is building an AI system to be sold on the market, procuring AI solutions to support internal operations or customizing its own internal AI systems: your Responsible AI strategy must hold any and all AI associated with your organization to the same ethical standards. Another company can be responsible for building the AI systems your teams utilize, but it is your organization that will be held accountable at the end of the day for the consequences of poorly designed technology. This means that when it comes to implementing ethics, you need to ensure your AI and data practices align with your foundational values, keep up to date on developments in Responsible AI techniques and have the right tooling in place to support Responsible AI development. Your People Pillar built the skillsets for Responsible AI, your Process Pillar built the mechanisms for carrying out Responsible AI, and now your Technology Pillar will build the actual AI responsibly.
There you have it: the three pillars of a Responsible AI strategy. Remember, every successful strategy will incorporate elements from People, Process and Technology without favouring one over another. Each of the three pillars builds on the others to create an interdependent yet elegantly simple system which, when executed with intention and care, results in holistic Responsible AI solutions addressing ethical problems at every layer.