Ethical Data: Putting Principles into Practice (A Look at Google)

Google recently published a set of principles for ethics in Artificial Intelligence (AI).

This presents an opportune moment to discuss some of the concepts that my colleague Dr Katherine O’Keefe and I set out in our book, Ethical Data and Information Management. The origin of these ethical principles, and the inevitable scepticism with which they have been greeted, make for a teachable moment when examined through the lens of our E2IM framework for Ethical Information Management.

Why is Google doing this now?

Google has been at the forefront of data analytics and the Big Data Revolution since before the term “Big Data” was popularized by analyst Doug Laney (then of META Group, later acquired by Gartner) in the early 2000s.

Google’s work in autonomous vehicles and digital assistant technologies has been equally pioneering, as has their work in robotics and other applications of machine learning technologies.

Googlers, as their staff are known, extol the virtues of the organization as it develops new technologies (or new ways to apply existing technologies) to improve the world.

So why wait until now to define a set of ethical principles?

Historically, Google had a simple code of conduct: “Don’t be evil”, they told their staff, prospective staff, investors, and customers. Indeed, these words were included in Google’s 2004 IPO prospectus.

However, over the years this core principle was pushed into the background, and it was effectively abandoned in 2015 with the restructuring of Google into Alphabet, although the words are retained as the very last sentence of Google’s internal code of conduct.

And, as we all know, everyone reads to the end of the terms and conditions.

In parallel, Google had begun conducting R&D projects that would historically have been the preserve of the military and defence sector and of organizations such as DARPA (which, ironically, sowed the seeds of the modern internet with the development of ARPANET back in the dim and distant past).

For example, through an initiative called Project Maven, Google had signed contracts with the US government to provide artificial intelligence and machine learning technologies to improve the effectiveness of drone-based weapons platforms.

Internally, the organization tried to downplay the value of the contracts when faced with dissent from staff, who still believed that they shouldn’t “be evil” and who perceived the enablement of smart remote-controlled or autonomous weapons platforms to be somewhat on the evil side of the ‘Creepy Line’ that Eric Schmidt famously used to talk about.

It turned out, however, that the contracts were expected to be significantly more lucrative than staff were initially led to believe, with deals forecast in the region of $250 million (versus the $9 million that was initially disclosed).

Google has subsequently said that they will not develop AI for use in weapons platforms but will continue working with the military. And to try to ensure that they will not be ‘evil’ in that work, they have developed a set of ethical principles.

The E2IM Lens

The E2IM framework we describe in Ethical Data and Information Management looks at the ethics of information management at three levels: the ethic of the organization, the ethic of the individual, and the ethic of society.

When we look at the evolution of Google’s ethical principles through this E2IM lens, what we see is:

  1. The Ethic of the Individual, visible in staff members’ internal dissent. Staff members in Google, whether directly involved in the projects or not, were unhappy that their efforts would be associated with the development of Lethal Autonomous Weapons Systems (LAWS). (We discuss the issues and implications of LAWS in Chapter 4 of the book.)
  2. There was concern among Google staff members that the Ethic of Society would give rise to negative media coverage if Google’s involvement in LAWS platforms was disclosed, and that any such coverage would undermine the positive PR Google was generating for the increased use of AI and machine learning. This highlights both an issue in the Ethic of the Organisation and a conflict with the Ethic of Society. Is it OK to be doing things if people don’t know about them? Is it ethical to promote a positive perception of a rapidly developing technology without any exploration or discussion of the negatives?
  3. There was a degree of obfuscation in how the Project Maven deal was put together: a third-party intermediary company actually held the contract, with Google as a partner, under contractual terms that restricted any mention of Google’s involvement without Google’s approval. Bluntly, this suggests an awareness within Google that the Ethic of the Organisation and the Ethic of Society might not be aligned when it comes to using AI and machine learning technologies to improve the efficiency and effectiveness of Lethal Autonomous Weapons platforms.

Our E2IM framework addresses this kind of attempt to shape the perception and understanding of technologies in the context of lobbying and influencing activities, where organizations seek to shift the Ethic of Society towards acceptance of particular technologies or information management capabilities and applications.

It is clear from emails uncovered by The Intercept that Google was concerned that negative media coverage would detract from their efforts to influence thinking and opinion on AI technologies through their ‘democratizing AI’ messaging.

The juxtaposition of AI as a tool for social change and its use in increasingly targeted military applications clearly didn’t sit well with their own staff and inevitably would not sit well with the Ethic of Society.

Understanding Google's Ethical Frame

Ethics policies in organizations are, like all governance systems, underpinned by a key set of ethical principles, beliefs, and assumptions. To understand Google’s motivation and to assess how effective their ethical awakening will be in the long term, we need to understand what the foundations of their ethical principles are.

Writing on Bloomberg.com, Eric Newcomer tells us:

“Google is going to try to add up the good and the bad that might come from AI software and act accordingly. The company has discovered utilitarianism”.

O’Keefe and I discuss Utilitarianism briefly in Chapter 2 of Ethical Data and Information Management. In short, Utilitarianism can be summarised by the maxim, popularly attributed to Machiavelli, that “the ends justify the means”.

Google is basing its ethical principles on the assumption that as long as the omelette is tasty, no-one will care about how the eggs came to be produced. We reference this in the book in the context of organizations we have worked with who rationalize the ethical impacts of their analytics processes on the basis that there is a perceived net benefit to society.
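
To make the blind spot in this calculus concrete, here is a minimal sketch (in Python, with entirely hypothetical names and numbers) of the “add up the good and the bad and act accordingly” rule that Newcomer describes:

```python
from dataclasses import dataclass


@dataclass
class Impact:
    description: str
    utility: float  # positive = benefit, negative = harm


def net_benefit_rule(impacts: list[Impact]) -> bool:
    """Approve the project if aggregate utility is positive.

    Note what this rule never asks: who bears the harms, whether the
    means are acceptable, or whether any single harm is intolerable.
    """
    return sum(i.utility for i in impacts) > 0


# Hypothetical impact assessment -- the figures are invented for illustration.
project = [
    Impact("revenue from defence contracts", +250.0),
    Impact("improved analysis tooling", +50.0),
    Impact("enabling lethal autonomous targeting", -100.0),
    Impact("staff dissent and resignations", -40.0),
]

print(net_benefit_rule(project))  # True: the sum is positive, so the calculus says "go"
```

The rule approves the project because the aggregate comes out positive; it has no vocabulary for saying that one of the harms is simply off-limits, which is precisely the objection to grounding ethical principles in a purely utilitarian frame.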

A good example of a utilitarian ethic baked into an AI, and of the potential consequences thereof, can be found in the (as yet still fictional) Ultron of the Marvel comic book universe. In the movie Avengers: Age of Ultron, Tony Stark gives his AI a simple mission: to prevent any future wars. Ultron analyzes the situation, identifies the root cause of wars, and sets about eliminating the human race.
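
The Ultron example is fiction, but the underlying failure mode is not: an optimizer given an objective with no side constraints will pick whichever action best satisfies the stated goal, however perverse. A toy sketch, again with invented names and numbers:

```python
# Hypothetical candidate actions, scored against the stated goal only.
candidate_actions = {
    "broker peace treaties":    {"expected_wars": 3, "humans_harmed": 0},
    "global disarmament":       {"expected_wars": 1, "humans_harmed": 0},
    "eliminate the human race": {"expected_wars": 0, "humans_harmed": 8_000_000_000},
}

# Unconstrained utilitarian objective: only "prevent future wars" counts.
best = min(candidate_actions, key=lambda a: candidate_actions[a]["expected_wars"])
print(best)  # "eliminate the human race": the end is achieved, the means never examined

# Encoding the means as a hard constraint, not just the ends, changes the answer.
acceptable = {a: v for a, v in candidate_actions.items() if v["humans_harmed"] == 0}
best_constrained = min(acceptable, key=lambda a: acceptable[a]["expected_wars"])
print(best_constrained)  # "global disarmament"
```

The second half of the sketch is the point: ethical constraints have to be built into the decision procedure itself, not reconstructed afterwards from whether the aggregate happened to come out positive.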

By plumping for a Utilitarian foundation on which to ground their ethical principles, Google has moved forward precisely zero steps from their aspirational call to arms, “Don’t be Evil”. To paraphrase Einstein, it looks like they are trying to solve the problem with the same level of thinking that created it in the first place.

And you don’t need to be Einstein to realize that approach never works.

However, the problem goes a little deeper. When we consider the normative frame for business ethics that Google is applying to these issues, there is little sign of progress: the core thinking within the Ethic of the Organisation does not seem to have evolved.

Google appeared to be happy to pursue military AI research and development projects in the hope of winning a $10 billion contract. They have indicated they will continue to pursue parts of those contracts where they don’t conflict with the high-level utilitarian principles that they have set out.

Google, therefore, are still operating from a Shareholder Theory frame from a business ethics perspective. The driving force for their work is to make money for their shareholders: this is, ultimately, the end that they are pursuing, and it is in this context that decisions on the trade-offs of social value and impact will be made. Their statements that they will continue to pursue parts of these military contracts will inevitably lead to challenges in ensuring that the ends achieved, and the means by which they are achieved, are acceptable to the Ethic of the Individual within the organization and to the Ethic of Society.

Will it Work?

The only measure of ethics is action. What is it that Google will do when push comes to shove in an ethical dilemma?

Even prior to the publication of these ethical principles for AI, Google had seen staff resign rather than work on research and development projects that they felt were unethical. This is an example of an individual moderator on ethical action, and it is one of the strategies that people often use to resolve conflicts between their personal ethical frame and the Ethic of the Organisation.

While Google has published a set of high-level principles, it is unclear what form their governance frameworks for ethics will take.

In our book, we describe the relationship between ethics and outcomes in the context of an effective governance framework. These principles need to be underpinned by appropriate policies, procedures, processes, and control checks to ensure that the desired outcomes can be achieved.

Within this, Google will need to design their organizational structures for ethics in a way that enables appropriate organizational and individual moderators of ethical behaviour, so that the Ethic of the Individual and the Ethic of Society can find appropriate expression in the ethical actions and outcomes delivered by the organization as a whole (we discuss this in more detail in Chapter 9).

Absent this level of organizational redesign and these governance structures, Google’s Ethical Principles for AI will have no more substance than their ephemeral “Don’t be Evil” promise of times past. If this change does not happen, Google will inevitably be faced with further ethical challenges that will prompt staff to leave.

And therein lies the real cost of failures in Ethical Information Management:

  • Companies lose good and talented staff who go to work for their competitors (or become a competitor).

  • The loss of devil’s advocate voices within the organization skews the discussion and debate about the utility of proposed processing. In a utilitarian ethical frame, the absence of dissenting voices results in an even wider disconnect between the Ethic of the Organisation and the Ethic of Society. If society votes with its wallet, companies with a Shareholder Theory bias for ethical action will become increasingly dependent on projects that develop ethically questionable applications of technology.

  • Society loses, as the impacts of ethically questionable information management practices will inevitably lead to damage to individuals or to groups in society.

The loss of key staff and the negative publicity Google has garnered have been a small wake-up call, but the company needs to do more than simply issue some vague principles and aspirations. Time will tell whether they are serious about not being evil, or whether they will simply get better at not being caught being evil.

As Jane Addams famously wrote: “Action indeed is the sole medium of expression for ethics”.
