The ethical skills we are not teaching: An evaluation of university-level courses on artificial intelligence, ethics and society


About the project

Canadian universities are not teaching the ethical skills that future workers in the artificial intelligence (AI) sector need to safely, productively and effectively engage with automatic decision-making systems (ADMS), i.e., AI and machine learning (ML). This training gap is consistent across all programs evaluated from 16 countries, and it puts the future of the burgeoning AI industry in Canada at risk. The gap also leaves future workers liable to unintentionally inflict harms (bias, discrimination, unfairness, privacy breaches, etc.) on their companies and on the people who use, or are the subject of, those systems. The identified lack of training also hinders the efforts of our government and civil society to extend Canadian values of equity, diversity and inclusion to the digital realm.

The report reviews 503 courses on AI. Our methodology takes a mixed-methods approach: natural language processing is applied to a corpus of course descriptions and syllabi, while close reading is used to establish the background of the evaluation.
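The kind of textual analysis described above can be illustrated with a minimal sketch. The report does not specify its NLP pipeline, so the approach below (TF-IDF weighting with cosine similarity over toy course descriptions, using only the Python standard library) is an assumption for illustration only, not the authors' actual method.

```python
# Hypothetical sketch of comparing course descriptions by semantic overlap.
# TF-IDF vectors + cosine similarity; sample texts are invented, not from
# the report's corpus of 503 courses.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split a description into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def tfidf_vectors(docs):
    """Build a sparse TF-IDF vector (dict of term -> weight) per document."""
    tokenized = [tokenize(d) for d in docs]
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vec = {t: (c / len(toks)) * math.log((1 + n) / (1 + df[t]))
               for t, c in tf.items()}
        vectors.append(vec)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

descriptions = [
    "Introduces ethical issues in machine learning, including bias and fairness.",
    "Covers privacy, accountability and fairness in automated decision-making.",
    "A survey of deep learning architectures for computer vision.",
]
vecs = tfidf_vectors(descriptions)
sim_ethics = cosine(vecs[0], vecs[1])      # two ethics-focused courses
sim_unrelated = cosine(vecs[0], vecs[2])   # ethics course vs. vision course
```

On this toy corpus, the two ethics-oriented descriptions score higher than the unrelated pair; a finding of "minimal semantic similarity" across real course descriptions would correspond to uniformly low pairwise scores.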

To support future workers as they grapple with the ethical complexity of AI issues (including damage to basic human rights), and to provide practical skills they can use in their working environments (industry, government or non-governmental organizations), we propose building the regulatory framework for developing these skills around the notion of the trustworthiness of ADMS. The report highlights the urgency of a coordinated effort to regulate this training across Canadian universities and colleges through quality assurance / quality improvement (QA/QI) mechanisms.

Key findings

  • According to our textual analysis, universities around the world (503 courses on nonfunctional issues of AI evaluated across 16 countries and 66 universities) are not teaching their students the ethical skills needed to prepare them to effectively and successfully engage with ADMS (i.e., AI and ML), either as developers of software, as managers whose organizations sell products or services with ADMS components or as users of those systems. 
  • In most courses (84.69%), learning outcomes are poorly described and do not differentiate between the knowledge, values and skills students will learn about and/or acquire. This lack of skill descriptions calls into question which ethical skills and principles students will bring into the job market, and how they will deploy their learning when engaging with ADMS.
  • The notions of “skills” found within the 503 course descriptions and syllabi cover a wide spectrum of terms and concepts. Our analysis demonstrates minimal semantic similarity across course descriptions within this sample. One can conclude that universities have not yet established a common ground of ethical skills for future workers in an AI-based economy that could serve as the standard for the education and training of our students. 
  • We define the “ethical skills” of students and future workers in relation to the trustworthiness of the ADMS they use, and as the set of learned abilities that will allow workers to perform the ethical actions required to build, safeguard and protect the trustworthiness of these ADMS in the design of the product, the development of the software and the management of the services provided.

Policy implications

  • Regulation of nonfunctional AI courses in universities and colleges through already established QA/QI mechanisms is urgently needed. This regulation would standardize the ethical skills that future AI workers across industries, organizations and governments will need in order to protect their organizations from unintended harm, uphold legal standards related to AI, promote Canadian social values of equity, diversity and inclusion, and stop the chain of bias and discrimination that affects equity-seeking populations and that ADMS tend to perpetuate and amplify.
  • We propose that these basic ethical skills be taught using the notion of “trustworthy ADMS” when training future workers. Trustworthy ADMS are those that foster users’ trust in both the software product and the development method. ADMS that soundly integrate ethical elements such as privacy protection, robustness or security are considered trustworthy. With respect to the development method, trustworthy ADMS also result from inserting and evaluating ethical dispositions as part of QA activities throughout the project’s life cycle.

Further information

Read the full report

Contact the researchers

Juan Luis Suárez, professor, Faculty of Arts and Humanities, Director of the CulturePlex Lab, Western University:

Daniel Varona, B.Sc., PhD candidate, CulturePlex Lab, Western University:

The views expressed in this evidence brief are those of the authors and not those of SSHRC, the Future Skills Centre or the Government of Canada.
