Time to Change Course in AI and Technology Design

By Ted Hewitt, President, Social Sciences and Humanities Research Council

Back in 1939, in a carefully crafted typewritten letter prepared with his colleague Leo Szilard, the world-famous physicist Albert Einstein warned then-President Franklin Delano Roosevelt of the power and potential dangers of research then being conducted on nuclear fission. This letter, along with Roosevelt’s response—both now widely accessible on the internet—serves as a reminder to all of us of both the potential and the unintended consequences of scientific advance.

It is generally conceded that Roosevelt and other world leaders largely failed to heed Einstein’s warning. The consequences were plainly in evidence by 1945 with the bombings of Hiroshima and Nagasaki. To this day, in the hands of the so-called superpowers, nuclear weapons continue to serve as both the primary driver toward, and the principal deterrent to, planetary destruction. At the same time, with foresight, the application of safeguards and, most important of all, government regulation, nuclear fission has provided an important source of power in many countries with less ready access to other forms of clean energy.

All of this points to a simple fact: in the hands of human beings, science and technological advance are not neutral. While science can certainly answer important questions about the “what” of invention, it cannot answer the “whether” or “if” of technological implementation, nor can it predict how societies will eventually use or react to it. Human beings drive and guide technological implementation, and human beings are the first to feel its effects.

More than 80 years on, we find ourselves in a similarly perilous situation. With the rapid onset of technologies associated with artificial intelligence (AI), both the benefits and the drawbacks of AI are becoming increasingly apparent. Here again, the technology has largely outpaced society’s ability to absorb it, or even to protect itself from the obvious harms AI presents through potential job loss, manipulation of information, and fraud perpetrated by both domestic and international actors. And here again, some of the primary architects and early adopters of the technology, within and beyond the academy, are sounding the alarm.

Within Canada, Yoshua Bengio and Geoffrey Hinton—prominent researchers and leading lights of the field—have publicly decried the pace of AI development, the pervasiveness of its use, its potential impact on society, and the lack of regulation surrounding it. In an open letter released in March 2023 and signed by more than 1,300 people, including Bengio, tech leaders proclaimed that “AI systems with human-competitive intelligence can pose profound risks to society and humanity.” They called on AI developers to work with policymakers to dramatically accelerate the development of robust AI governance systems, including public funding for technical AI safety research and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Governments, too, are increasingly worried, as seen in US President Biden’s recent request to meet with the CEOs of America’s largest AI tech firms.

But this is not 1939, or 1945. In the interim, we have seen the development and expansion of the global research enterprise across the disciplinary spectrum, particularly in the social sciences and humanities—the disciplines most directly linked to our fundamental understanding of human thought and behaviour. In 2012, the Imagining Canada’s Future foresight exercise identified the role of the social sciences and humanities in addressing the impact of emerging technologies as critical to mitigating the global challenges expected to affect Canada in significant ways over the coming decade. Given the pace of technological change now under way, particularly in new AI and quantum technologies, we have never needed these disciplines more. But what we most need to do right now is to listen and to learn—all toward building societal awareness of the impact of these technologies and toward informing the regulation that will be required to make best use of them and to keep us safe. It’s not too late.

For a number of years, the Social Sciences and Humanities Research Council (SSHRC), Canada’s premier funder of research in these disciplines, has supported projects that can help guide us in this respect: Teresa Scassa at the University of Ottawa works on artificial intelligence and the law, data governance, data privacy, and the legal dimensions of data scraping; Eran Tal at McGill University addresses the ethical and social implications of big data and machine learning algorithms; and Benoit Dupont at the Université de Montréal is conducting research to help businesses and parapublic organizations develop the resilience metrics they need to prepare for cyberattacks.

Science, technology, and the manner of their implementation need to be seen as complementary and integrated processes. This means working together from the very beginning of technology development to weigh scientific advance alongside the manner of its implementation, its likely consequences, and the legislative and jurisdictional context it requires. It means harnessing the power and creativity of our entire research enterprise, from the natural sciences and engineering to health and the social sciences and humanities. More than ever, our present and our future utterly depend on it.
