2021 Exploration Multidisciplinary Review Panel Co-chairs’ Report


This report from the co-chairs of the New Frontiers in Research Fund (NFRF) Multidisciplinary Review Panel for the 2021 NFRF Exploration stream competition is submitted for consideration by the NFRF Steering Committee.

Co-chairs

Name Affiliation
Amilhon, Bénédicte Université de Montréal
Chelli, Mohamed University of Ottawa
D'Iorio, Marie University of Ottawa
Doan, Jon University of Lethbridge
Dogba, Maman Joyce Université Laval
Foruzanmehr, Reza University of Ottawa
Geddes-McAlister, Jennifer University of Guelph
Hamilton, Leah Mount Royal University
Jordan, Steven McGill University
Légaré, François Institut national de la recherche scientifique
MacLeod, Andrea University of Alberta
Martin, Melanie The University of Winnipeg
Mills, Josephine University of Lethbridge
Mulenga, Albert Texas A&M University College of Veterinary Medicine & Biomedical Sciences
Pfeffer, Gerald University of Calgary
Piché, Jean Université de Montréal
Rayan, Steven University of Saskatchewan
Sellam, Adnane Université de Montréal

Competition process

The New Frontiers in Research Fund (NFRF) Exploration stream is designed to promote high-risk, high-reward and interdisciplinary research.

There were 444 full applications submitted to the competition, of which 170 (38%) were led by early career researchers (ECRs). To be considered ECR-led, both the nominated principal investigator (NPI) and the co-principal investigator (if applicable) must meet the definition of an ECR. Expert reviews were sought from external reviewers through a double-blind process, in which the external reviewers did not have access to information about the applicants’ identities. External reviewers provided assessments according to the high-risk, high-reward and feasibility criteria as they relate to the research plan only. These assessments were shared with the five members of the Multidisciplinary Review Panel assigned to the review of each application. All applications were evaluated according to a seven-point scale for the high-risk (40%), high-reward (40%) and feasibility (20%) criteria, and on a pass/fail basis for the interdisciplinarity/fit to program and equity, diversity and inclusion (EDI) criteria.
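As a minimal illustration only, and assuming the stated weights are combined as a simple weighted average of the seven-point ratings (the combination method is an assumption for illustration, not a description of the actual scoring procedure), an overall merit score would take the form

\[
\text{Overall} = 0.40\,R_{\text{high-risk}} + 0.40\,R_{\text{high-reward}} + 0.20\,R_{\text{feasibility}}
\]

so that, for example, ratings of 6, 5 and 4 would yield 0.40(6) + 0.40(5) + 0.20(4) = 5.2 out of 7.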

Applications were then grouped based on the five members’ ratings: 184 were identified for discussion by the Multidisciplinary Review Panel, and 260 were not considered further for funding. The 138 panel members discussed these applications at virtual meetings held March 7 to 11, 2022. Six virtual meeting rooms were set up, each presided over by three rotating co-chairs, one representing each of the three research funding agencies. Following the discussions, reviewers were invited to amend their ratings, if they wished, based on the deliberations.

At the end of the week of deliberations, the co-chairs for the six meeting rooms met to finalize the recommendations.

Observations

At the end of the competition process, a policy discussion was held with members of the Multidisciplinary Review Panel. Members continue to be positive about the program and the overall competition process. A summary of the discussions and the suggestions for future consideration is provided below.

Equity, diversity and inclusion

  • The quality of some EDI plans continues to improve. Applicants appear to be continuing to undertake EDI training or learning, which is reflected in solid EDI plans. Applications that did well on the EDI criterion described their specific context, how actions would be implemented and how their impact would be measured.
  • Members observed that there is still room for improvement: many applicants were very generic in their EDI statements, with several based largely on institutional EDI plans, even though the instructions clearly indicate that such plans may be drawn upon but must be tailored to the particular context of the fields and teams involved.
  • Some members echoed the 2020 suggestion that the EDI plan be integrated throughout the proposal, to ensure that applicants build EDI practices into their research rather than treating them as a separate activity. Members recognize that this approach would require additional guidance for applicants, particularly on differentiating between EDI considerations for the research environment (the team) and EDI considerations for the research design (i.e., gender-based analysis plus), which is currently assessed under the feasibility criterion.
  • Members were asked to comment on a suggestion from the 2020 Multidisciplinary Review Panel to implement a more nuanced, non-binary evaluation matrix for the EDI criterion (Strong pass / Marginal pass / Fail). While some members supported this approach, feeling that a more granular scale would send a stronger message to applicants, there were a number of opposing viewpoints. Members noted that the binary pass/fail scale sends an important message to the community, and they do not want proposals with a “marginal pass” to be considered fundable. Some suggested that a more granular scale be used only to facilitate panel discussions, with the binary pass/fail scale continuing to be used for applicants, while others felt that this might complicate the assessment for members. An interesting suggestion was that, if more nuanced ratings are adopted, the categories should be Strong pass / Marginal fail / Fail, the rationale being that a marginal pass would not foster real change, whereas a marginal fail, considered a fail from a program perspective, would signal to applicants that they are on the right track.

Interdisciplinarity / fit to program

  • Some members suggested that the interdisciplinarity criterion, including the “Fit to program” subcriterion, be given dedicated space in the application, similar to the EDI criterion, so that applicants can justify how their proposal fits the Exploration stream. Members had a sense that, in larger proportions than in past competitions, applications submitted to NFRF this year could have been funded through other federal agency programs. In many cases, proposals appeared to have been recycled after being unsuccessful in other competitions.
  • It was suggested that featuring projects funded through the Exploration stream might be an effective way to further highlight the expectations regarding interdisciplinary research, and the difference between multidisciplinary and interdisciplinary research.
  • Teams with a history of collaborating should highlight the novelty of the interdisciplinary approach in the proposal, explain how it differs from other projects they have worked on or are currently working on, and explain why the approach is necessary.

High risk, high reward and feasibility

  • Members continue to be surprised by the number of applications that do not clearly and distinctly address the high-risk and high-reward nature of the proposal. It was suggested that the structure of applications be made more uniform (e.g., by requiring specific headers in a set order) and that applicants again be reminded that proposals must be written in language that can be assessed by a multidisciplinary panel with broad expertise.
  • Some members suggested that the application be expanded to give applicants more space to address the elements required to demonstrate the feasibility of the project (e.g., the state of the art, the proposed approach).
  • Members provided useful input on the way high reward is described as it relates to the social sciences and humanities (SSH). Since social relations evolve very slowly, the impact of an innovation resulting from a risky research project may only be understood many years later. It may therefore be difficult, in the SSH, to argue that a project is both high risk and high reward.

Review process and competition meeting

While understanding the need for virtual meetings and recognizing that they were efficient and pleasant, Multidisciplinary Review Panel members missed the in-person review meetings. Members feel that being able to listen in on discussions of applications they are not assigned to helps set and maintain calibration across the panels and ensures a common understanding of what constitutes “excellence” for each criterion. However, it was also noted that virtual meetings are more accessible to all. A return to in-person meetings would have to allow some members to participate virtually (a hybrid model) and would also have to be more compact, to reduce the overall length.
