Securing Public Trust in AI: A Question of Policy or Social Licence?

By Ted Hewitt


This article was originally published in The Globe and Mail on May 5, 2019.


Following the recent accidents involving the Boeing 737 MAX 8 airplane, are you starting to think carefully about the type of aircraft you will be flying in during your next trip? And how would you feel about your children riding to school in a driverless car or school bus after the recent spate of accidents involving autonomous vehicles? Would you trust a data-driven medical diagnosis and prescription drug regime as much as the opinion of your family doctor?

Discussions and debates about artificial intelligence (AI) abound today. Developers and tech companies applaud advances, and naysayers warn about humanity’s potential destruction by super-intelligent machines. But we are not paying enough attention to how AI’s many applications are being marketed to you and me, the intended beneficiaries. What will it take for us to buy into the legitimacy of complex systems contained in black boxes that we do not understand, and that companies often say are proprietary anyway? This is not simply a question of government policy or of what is allowed under current or planned laws.

In the past, many new technologies, such as home appliances and smartphones, were sold on the value proposition of saving us time and effort. To a large extent, society has embraced this logic. AI, however, differs significantly from earlier technologies because, in many cases, humans are not in control and machines are doing the thinking. Algorithms are influencing decisions in every aspect of our lives: shopping, banking, hiring, working, dating, policing, education, health care, transportation and more. These developments are being presented as extending far beyond saving time or labour, ultimately to the economization of thinking itself. In the public imagination, however, reducing human input in decision-making may prove far less popular than increased productivity and free time.

So we must better understand the public appetite and expectations for AI, and investigate and define the boundaries of its social licence (that is, public acceptance of or support for projects or practices). Companies and governments cannot grant themselves social licence. The evolution, distribution and adoption of a technology or application must rest on sufficient legitimacy, accountability, acceptance and trust, along with the informed consent of the people most affected. Look at what happened to energy companies such as Shell in Africa and BP in the Gulf of Mexico when they lost their social licence to operate. The public backlash in Europe against genetically modified (GM) crops is another sobering example of the effects of an absence of social licence.

Research on social licence has predominantly been done in mining, forestry, energy and other natural resource industries. It’s now time to grapple with these issues in the wide-reaching realm of AI. Some studies in the social sciences and humanities are already exploring privacy issues around the data gathered by AI applications, particularly in health. Others are looking at ethical and regulatory frameworks, particularly for autonomous military robots, driverless vehicles and social robots. Still others have identified programmed discrimination tied to the inherent biases of programmers themselves: voice-recognition systems that fail to properly decode speech from women or non-native speakers; photo-tagging software that cross-categorizes some humans and animals; and aircraft guidance software that reflects flawed assumptions about when an aircraft is in a stall (as the Boeing 737 MAX 8 demonstrated).

Other reports have highlighted how the automated placement of advertising and other information, through otherwise innocuous invitations on websites, can lead users into fake-news traps. Similarly, proposed links to additional content on a popular video streaming site can lead to worm-holing, subtly pulling unsuspecting viewers into the abyss of extremist groups. In both cases, research shows that this puts at risk not only individuals but potentially democracy itself, by relativizing the truth, exaggerating threats or falsely identifying perceived enemies of the people.

AI technological advances are moving at a brisk pace. Ethical frameworks, such as the Montréal Declaration, are being developed to guide AI uses. We now need sound research on how humans react to and adopt machine-driven environments. What’s required to establish public trust in, and the legitimacy of, AI applications and solutions? What’s an acceptable level of risk? What criteria influence which decisions we delegate to machines? What level of human oversight are we comfortable with, and in which situations? How much of a black box’s workings do we need to understand before we trust it?

Canada is already a leader in AI scientific research, signalled strongly by the recent awarding of the prestigious Turing Award to Yoshua Bengio, Geoff Hinton and Yann LeCun for pioneering work on deep learning. It’s now time to step up interdisciplinary research on social acceptability and the dimensions of the social licence needed to inform the responsible and ethical design of AI systems. After all, AI is as much about its impacts on society and the relationship between technology and humanity as it is about the science itself.