An equity lens on artificial intelligence

About the project

In Canada and around the world, organizations across many sectors are using artificial intelligence (AI) for a variety of purposes, from hiring employees to assessing different kinds of risk and making investment recommendations. However, it is well known that social relations and contexts are reflected and reproduced in technology, and AI is no exception. Although AI can be used to benefit marginalized groups, research shows that it has the potential to reinforce biases, discrimination and inequities, including in terms of gender, race and class. A concerted focus on equity and fairness in AI by businesses and governments is necessary to mitigate possible harms.

This knowledge synthesis report aims to provide a resource for scholars and practitioners for viewing AI through the lens of equity. This report synthesizes existing research and knowledge about the connection between AI and (in)equity and suggests considerations for public and private sector leaders when implementing AI. In addition to the report, this SSHRC-funded project will also engage graduate business students and the public in knowledge mobilization events to encourage problem-solving around AI and inequity.

Key findings

  • AI is a double-edged sword, with potential to both mitigate and reinforce bias.
  • Because AI relies on statistical prediction methods that can be audited, it has the potential to produce outcomes that help groups facing marginalization in situations where human judgment may be clouded by cognitive biases. That is, AI can be explicitly programmed to check for and reduce racial, gender or other inequality in predictions and decisions.
  • Despite this potential, and because society’s deeply rooted inequality and inequity are often reflected in technologies, some AI has reinforced marginalization of certain groups, such as women and gender minorities, as well as racialized and low-income communities. AI-powered products and services may use biased data sets that then reproduce this bias; amplify stereotypes and marginalization, sometimes for profit; and/or widen asymmetries of power.
  • Some examples from research include facial identification software that is least effective on racialized women, predictive policing algorithms that result in concentrating policing resources in racialized communities, and credit-scoring algorithms that may underrate creditworthiness of marginalized people.
  • There are also varied potential impacts of AI and automation on jobs and labour. It is possible that women and racialized and low-income groups may be more susceptible to job loss or displacement because of automation across an increasing number of blue-, white- and pink-collar jobs.
  • The reinforcement of inequity and inequality through AI has occurred because of:
    • embedded bias or significant omissions in datasets used for AI;
    • the complexity and trade-offs involved in aligning AI and algorithms with social values such as fairness and equity when profits are also at stake;
    • a lack of transparency from those creating and/or implementing AI, sometimes for reasons such as intellectual property protection;
    • a lack of accountability to the public or other users of AI, often due to a lack of policy or legal obligation to do so; and
    • few perspectives from and limited participation by marginalized and diverse groups in the technology sector.
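The auditability point above can be illustrated with a minimal sketch: given a model's decisions and a protected attribute, compare positive-outcome rates across groups. This is one common audit metric (disparate impact), not a method from the report; the data, function names and the 0.8 review threshold are illustrative assumptions.

```python
# Minimal fairness-audit sketch: compare selection (positive-decision)
# rates across groups defined by a protected attribute.
# All data and names here are illustrative, not from the report.

def selection_rates(decisions, groups):
    """Return the positive-decision rate for each group."""
    totals, positives = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + d
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest group selection rate to the highest.
    Ratios below ~0.8 are often flagged for review (the 'four-fifths rule')."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring decisions (1 = advance, 0 = reject) for two groups.
decisions = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)          # {'a': 0.8, 'b': 0.4}
ratio = disparate_impact_ratio(decisions, groups)   # 0.4 / 0.8 = 0.5
```

Because the metric is computed directly from recorded decisions, an audit like this can be run by the organization deploying the model or by an external reviewer, without access to the model's internals.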

Policy implications

  • Creators, researchers and implementers of AI can prioritize aligning AI with social values such as fairness, despite trade-offs for efficiency and profit. This entails the need for industry cooperation for safe, responsible and fair AI.
  • Technology companies and governments can prioritize initiatives and funding for equitable and inclusive representation in AI development. Centring perspectives from those of marginalized backgrounds may assist in creating products and services that better serve all communities and create pathways for wider community involvement in technology.
  • Governments can create policies for AI that prioritize accountability and transparency and require organizations to adhere to these principles. The EU has recently proposed the first-ever legal framework on AI. Although Canada has taken some steps toward AI policy, their effects remain to be seen and regulations could be strengthened.
  • Governments and companies can work towards economic security for workers through attention to reskilling and upskilling programs. These efforts can focus especially on women and others who have faced the greatest impacts.
  • Academic researchers can deepen knowledge on AI and inequity, such as by continuing cross-disciplinary work on the social, political and environmental impacts of AI, and developing new and different alternatives that prioritize mitigation of harm.

Further information

Read the full report

Contact the researchers

Sarah Kaplan, Director, Institute for Gender and the Economy, Distinguished Professor of Gender and the Economy, and Professor of Strategic Management at the Rotman School of Management, University of Toronto; s.kaplan@rotman.utoronto.ca

Carmina Ravanera, Research Associate, Institute for Gender and the Economy, Rotman School of Management, University of Toronto; carmina.ravanera@rotman.utoronto.ca

The views expressed in this evidence brief are those of the authors and not those of SSHRC, NSERC, CIHR or the Government of Canada.
