Annika Abraham | Polygence

Symposium of Rising Scholars, Fall 2025

Annika will be presenting at the Symposium of Rising Scholars on Saturday, September 27th! To attend the event and see Annika's presentation, go to the Polygence Scholars page.
Polygence Scholar 2025

Annika Abraham

Class of 2026 | Sunnyvale, California


Projects

  • "Bias and Fairness Evaluation in Predictive Models of Recidivism: A Comparison of XGBoost, Logistic Regression, and Random Forest" with mentor Luyuan Jennifer (Working project)

Project Portfolio

Bias and Fairness Evaluation in Predictive Models of Recidivism: A Comparison of XGBoost, Logistic Regression, and Random Forest

Started July 9, 2024

Abstract or project description

As machine learning models are increasingly used and relied on in high-stakes domains such as criminal justice, concerns over their fairness across demographic groups have grown. This study evaluates the fairness and performance of three machine learning models (Random Forest, XGBoost, and Logistic Regression) using data from the Georgia Department of Corrections. The dataset contains detailed demographic and criminal history records and tracks recidivism over a three-year post-release period.
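The comparison described above can be set up as a straightforward train/evaluate loop over the three classifiers. The sketch below is a minimal, hypothetical version of such a setup; the file path, column names, and preprocessing choices are illustrative assumptions and do not reflect the project's actual pipeline.

```python
# Minimal sketch: train Random Forest, XGBoost, and Logistic Regression
# on a recidivism dataset. File path and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

df = pd.read_csv("georgia_recidivism.csv")            # hypothetical path
y = df["recidivated_within_3_years"]                  # hypothetical target column
X = pd.get_dummies(df.drop(columns=["recidivated_within_3_years"]))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=42),
    "xgboost": XGBClassifier(eval_metric="logloss", random_state=42),
}

predictions = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    predictions[name] = model.predict(X_test)
```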

The models were evaluated using performance and fairness metrics, including demographic parity, equalized odds, and disparate impact. Fairness checks were conducted across race-, gender-, and age-based subgroups. SHAP (SHapley Additive exPlanations) values were used to identify the features most influential for each subgroup. Additional diagnostics, including ROC curves, calibration plots, and confusion matrix heatmaps, were produced to assess model performance and subgroup-level differences in predictive behavior.
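The group fairness metrics named here can all be derived from per-group selection rates and error rates: demographic parity compares positive-prediction rates across groups, disparate impact takes their ratio, and equalized odds compares true and false positive rates. The sketch below hand-rolls these quantities for one model and one sensitive attribute; the function name and the synthetic data are illustrative assumptions, not the study's evaluation code.

```python
# Sketch of per-group fairness metrics from hard 0/1 predictions.
import numpy as np
import pandas as pd

def group_fairness_report(y_true, y_pred, sensitive):
    """Per-group selection rate, TPR, and FPR, plus summary gaps."""
    frame = pd.DataFrame({"y": y_true, "pred": y_pred, "g": sensitive})
    rows = {}
    for group, sub in frame.groupby("g"):
        tp = ((sub.pred == 1) & (sub.y == 1)).sum()
        fp = ((sub.pred == 1) & (sub.y == 0)).sum()
        fn = ((sub.pred == 0) & (sub.y == 1)).sum()
        tn = ((sub.pred == 0) & (sub.y == 0)).sum()
        rows[group] = {
            "selection_rate": (tp + fp) / len(sub),
            "tpr": tp / (tp + fn) if (tp + fn) else np.nan,
            "fpr": fp / (fp + tn) if (fp + tn) else np.nan,
        }
    report = pd.DataFrame(rows).T
    rates = report["selection_rate"]
    summary = {
        # Demographic parity: gap in positive-prediction rates across groups.
        "demographic_parity_diff": rates.max() - rates.min(),
        # Disparate impact: ratio of lowest to highest selection rate.
        "disparate_impact_ratio": rates.min() / rates.max(),
        # Equalized odds: worst gap in TPR or FPR across groups.
        "equalized_odds_diff": max(report["tpr"].max() - report["tpr"].min(),
                                   report["fpr"].max() - report["fpr"].min()),
    }
    return report, summary

# Toy usage with synthetic labels and a synthetic sensitive attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
y_pred = rng.integers(0, 2, 500)
group = rng.choice(["A", "B"], 500)
report, summary = group_fairness_report(y_true, y_pred, group)
```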

Although this analysis focuses on exposing potential biases rather than mitigating them, preliminary findings indicate disparities in both predictive performance and fairness metrics across demographic groups. This study highlights the risk of perpetuating bias when such models are used in real-world justice systems. Future work will explore fairness-mitigation strategies to reduce these disparities and support more equitable algorithmic decision-making.