2020 Exploration Multidisciplinary Review Panel Co-chairs’ Report


The Multidisciplinary Review Panel co-chairs’ report for the 2020 Exploration competition is submitted for consideration to the NFRF Steering Committee.

Co-chairs

Amira Abdelrasoul
University of Saskatchewan

Kimberly Brewer
Dalhousie University

Chantelle Capicciotti
Queen's University

Marceline Côté
University of Ottawa

Benoit Dostie
HEC Montréal

Emma Duerden
Western University

Leah Hamilton
Mount Royal University

Kevin Hewitt
Dalhousie University

Catherine Mah
Dalhousie University

Julio Mercader
University of Calgary

Josephine Mills
University of Lethbridge

Daniel O'Donnell
University of Lethbridge

Jerry Radziuk
University of Ottawa

Steven Rayan
University of Saskatchewan

André St-Hilaire
Institut national de la recherche scientifique

Cheryl Suzack
University of Toronto

Bilkis Vissandjée
Université de Montréal

Khan Wahid
University of Saskatchewan

Competition process

The New Frontiers in Research Fund (NFRF) Exploration stream is designed to promote high-risk, high-reward and interdisciplinary research.

There were 607 full applications submitted to the competition, of which 214 (35%) were led by Early Career Researchers (ECRs). To be considered ECR-led, both the nominated principal investigator (NPI) and the co-principal investigator (if applicable) must meet the definition of ECR.

Expert reviews were sought from external reviewers through a double-blind process in which the external reviewers did not have information about the applicants’ identity. External reviewers provided assessments according to the high-risk, high-reward and interdisciplinarity criteria, and the feasibility criterion as it relates to the research plan only. These assessments were shared with the five members of the Multidisciplinary Review Panel assigned to the review of each application. All applications were evaluated according to a seven-point scale for the high-risk (40%), high-reward (40%) and feasibility (20%) criteria, and on a pass/fail basis for the interdisciplinarity/fit to program and EDI criteria.

Applications were then grouped based on the five members’ ratings: 32 were identified to be recommended; 429 were identified to be not recommended; and 146 were identified to be discussed by the Multidisciplinary Review Panel. The 162 panel members discussed these applications at virtual meetings held March 8-12, 2021. Six virtual meeting rooms were set up, each presided over by three co-chairs, one from each of the three research funding agencies. Following the discussions, the reviewers were invited to amend their ratings, if they wished, based on the deliberations.

At the end of the week of deliberations, the co-chairs for the six meeting rooms met to finalize the recommendations: 117 applications were recommended for funding; 41 of these (35%) were ECR-led.

Observations

At the end of the competition process, a policy discussion was held in each virtual meeting room, followed by a further discussion among all the co-chairs. In general, as in other years, members were positive about the program and the overall competition process. A summary of the discussions and suggestions for future consideration is provided below.

Single-stage adjudication process

  • The Letter of Intent (LOI) stage, introduced in the 2019 competition, was removed for the 2020 competition to reduce the burden on the research community—for both applicants and reviewers. This change meant more full applications needed to be reviewed, requiring more members to be part of the Multidisciplinary Review Panel and a slightly higher workload per member (20-25 applications each) compared to the full application-stage review in the 2019 competition.
  • Some members suggested that the process could be further streamlined by pre-reviewing the EDI and interdisciplinarity criteria of applications, thereby reducing the number of applications requiring in-depth review. However, the co-chairs noted that an integrated review of EDI and interdisciplinarity in parallel with the rest of the proposal is central to the competition and fundamental to the scientific review of the application and research team as a coherent whole.

Double-blind external reviews

  • Members continue to appreciate external reviewer reports, particularly when the proposal lies outside a member’s area of expertise. The double-blind process is seen as valuable and members support its continued use.
  • Members noted that the external reviewers’ assessments of the interdisciplinarity criterion were generally not informative, and that many external reviewers did not understand the criterion or what constitutes interdisciplinary research from an NFRF perspective. Similarly, members noted that some external reviewers interpreted high risk as a negative attribute, resulting in assessments that ran counter to the members’ own evaluations. It was suggested that asking external reviewers to specifically summarize strengths and weaknesses could improve the value of their input for members.
  • The difficulty in securing external reviewers for all applications was also noted. While some members indicated that a minimum of two reports per application from individuals who rate their expertise as high is ideal, others suggested that external reports were only critical for applications where there was limited expertise among the members.

Equity, diversity and inclusion (EDI)

  • There has been improvement in the quality of some EDI plans from the 2019 competition. It is clear that some applicants have undertaken training or learning, which is reflected in their solid EDI plans. Applications that did well on the EDI criterion provided specific information on how actions would be implemented and their impact measured.
  • Despite the improvement in the quality of some applications, members observed continued room for improvement in many others. Applications that failed the EDI criterion, and many that had marginal passes (a rating of “mixed” overall), had some common weaknesses. Many EDI statements were generic, with few or no action items or outcome measures. Several seemed to be based on institutional EDI plans and lacked contextualization vis-à-vis the specific research team, as well as a demonstration of understanding of structural biases and how such biases could affect a member of an underrepresented group.
  • Members suggested that a restructuring of the EDI section could support the improvement of application quality by helping applicants to more clearly and succinctly articulate their context and EDI plans, which would also make evaluation of this criterion more robust at the competition level. Another suggestion was to require applicants to integrate the EDI plan throughout the proposal, rather than keeping it separate, to ensure applicants can demonstrate an integrated implementation of best practices within the proposed research.
  • Members noted again that an EDI accountability mechanism is necessary for successful applicants, particularly in the reporting phase. Applicants should be required to report on outcomes of their EDI plans to help ensure implementation of practices to support EDI.
  • It was suggested that a non-binary, more nuanced evaluation matrix for the EDI criterion should be considered.

Indigenous research

  • Multidisciplinary Review Panel members support a continued effort by the NFRF program to increase the submission of, and representation among, research proposals led by Indigenous communities and researchers. As was noted in previous competitions, not all projects with Indigenous research components, or with an impact on Indigenous communities, were identified by the applicants as having an Indigenous research component.
  • It was suggested that measures be taken to highlight and emphasize inclusive Indigenous research, as well as to appeal to Indigenous researchers looking for innovative funding opportunities who may not see their research fitting within the program criteria.

High risk, high reward, feasibility

  • Multidisciplinary Review Panel members who participated in previous competitions had the impression that fewer bold ideas were presented in the 2020 applications. They noted a greater proportion of poor-quality proposals than in previous competitions. They acknowledged this may be due in part to structural barriers and disparities within the research community resulting from the COVID-19 pandemic, as well as to the single-stage adjudication process, under which proposals that might not have advanced past a first stage in a two-stage process were submitted and evaluated as full applications.
  • Members found that applications with clear and distinct high-risk and high-reward sections were easier to understand and evaluate. Some members suggested that applicants be required to structure their proposals in a specific and uniform way, with the use of section headings and specified page lengths for each. It was also suggested it be further emphasized to applicants that proposals should be written for a multidisciplinary audience, including both experts and non-experts. Members observed significant use of jargon and acronyms, as well as proposals written “in a vacuum,” which failed to place the research in context.
  • Many applications were overly ambitious in terms of timelines. Exploration grants are two-year grants, and the scope of the projects and expected outcomes should reflect this. Compared to the 2019 competition, fewer applications provided timelines, contingency plans in case of unexpected outcomes, or plans for future development after the two-year timeframe.
  • Members again noted the relatively low number of applications from the social sciences and humanities disciplines compared with the health sciences and natural sciences and engineering. It was suggested this might reflect the value and duration of Exploration grants compared to SSHRC’s Insight program, which is open to interdisciplinary research and has a much higher success rate than Exploration competitions. The definitions of high risk and high reward should be revisited to ensure that the language is not inadvertently discouraging applications from these disciplines.

Interdisciplinarity / fit to program

  • There is a need to provide a more detailed explanation regarding the expectations of interdisciplinary research, and what that means for the Exploration stream. Greater emphasis on the difference between a multidisciplinary approach and an interdisciplinary or transdisciplinary approach would help to reduce the number of applications that do not fit the program. Furthermore, greater clarity on the need for team members to bring diverse disciplinary perspectives and expertise is needed. Some applicants consider themselves to work in an interdisciplinary field or to have an interdisciplinary background, and therefore assume that their proposed project is, by definition, interdisciplinary.
  • Teams that have a history of collaborating need to highlight the novelty of the interdisciplinary approach in the proposal, how it differs from other projects they have worked on, and why the approach is necessary.
  • It was suggested that a non-binary, more nuanced evaluation matrix for the interdisciplinarity criterion should be considered.

Review process and competition meeting

  • While understanding the need for virtual meetings, Multidisciplinary Review Panel members missed the richness and range of discussions of the in-person review meetings. Hearing discussions of applications they were not assigned to helps to set and maintain calibration across the panel. At a minimum, members want the top-rated applications to be discussed. As much as possible, these should be identified and accessible to all members in advance of the meeting (while respecting conflicts of interest) to promote calibration.
  • It was suggested that the entire review process could be double-blind, including evaluation by the members, if the review of the EDI and interdisciplinarity criteria were separated. A risk of this approach is that EDI plans might not be integrated into the proposal or reflected in the project plan, and that the experience and expertise of the research team members would not be considered in the assessment of the feasibility criterion.
  • It was recommended that the page length be increased for French applications.