Assessing Race in Clinical Research Models

Key Dates

RFP posted: December 2022
Letter of intent pre-proposal (required via email): January 23, 2023
Invitations to submit full proposal: January 31, 2023, by 3 p.m. CST
Application deadline (invited participants): March 6, 2023
Peer review: March 2023
Notification of awards: May 2023
Award start date: July 1, 2023


Machine learning and artificial intelligence (AI) will likely continue to be integral in research-derived formulas that predict risk of disease or risk of complications. “Race correction” has been a common practice in the development of these research-derived formulas. The American Heart Association is committed to research that evaluates the role of race in algorithms and risk models and takes a close look at the downstream clinical implications of bias in algorithms on health disparities and mistrust in research.

The purpose of this Request for Proposal (RFP) announcement is to take a fresh look at risk prediction models and algorithms in the field of cardiovascular and stroke science that have been adjusted for race, and to reevaluate them in light of potential downstream inequities in disease outcomes and treatments.

  • This RFP is targeted to early-career trainees (pre-docs, post-docs, and fellows).
  • Each trainee is required to have a letter of support from a mentor.
  • Awards are for one year at $50,000 (including 10% indirect costs), with additional AWS credits of up to $50,000 per year to support data analysis on the AHA Precision Medicine Platform.

Research supported by this RFP includes analysis of epidemiological studies, clinical trial data, de-identified electronic health record data, or other already accumulated datasets. This RFP does not support establishment of new cohorts or collection of new data on existing cohorts.

Who we’re looking for
Our goal is to find trainees who are interested in this area of research and who are paired with a mentor experienced in biostatistics, risk prediction models, and bias.