Context: Deep learning models are now widely used in medical imaging to assist in disease diagnosis and detection. However, these models can inherit biases present in the training data, which may lead to discriminatory diagnostic errors, particularly with respect to patients' sex, ethnicity, age, or geographic origin (1).
In medical imaging, the diversity of the populations represented in datasets is a critical issue. Biases may arise when the data are unbalanced, for example when certain populations are underrepresented in the training images. It is therefore essential to develop methods to detect and correct these biases so that all populations have equal access to reliable and accurate diagnoses, while adhering to ethical principles.
Objectives of the Internship: The main objective of this internship is to explore and propose solutions to reduce prediction biases in deep learning models used in medical imaging, while addressing the ethical issues associated with implementing these solutions. More specifically, the internship aims to:
- Bias Identification: Study and implement methods to detect biases in the training data and in the predictions generated by the models (the first sketch after this list illustrates such an audit of predictions).
- Bias Correction: Explore techniques to mitigate biases in deep learning models, such as regularization or modifications to optimization objectives (2) (see the second sketch after this list).
- Fairness Evaluation: Propose evaluation metrics to measure the impact of biases and the fairness of the models' decisions (e.g., equality of opportunity, prediction parity).
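For concreteness, here is a minimal sketch of the kind of per-group audit involved in the first and third objectives, assuming binary labels and predictions plus a subgroup attribute per patient. All names (`fairness_report`, `y_true`, `y_pred`, `group`) are illustrative assumptions, not a prescribed API:

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Per-group audit of binary predictions: true-positive rate
    (equality of opportunity) and positive-prediction rate
    (prediction / demographic parity)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for g in np.unique(group):
        mask = group == g
        positives = y_true[mask] == 1
        # TPR within the subgroup; NaN if the subgroup has no positives
        tpr = y_pred[mask][positives].mean() if positives.any() else float("nan")
        report[g] = {"TPR": tpr, "positive_rate": y_pred[mask].mean()}
    return report

# Toy example: a TPR gap across groups points to an equality-of-opportunity
# violation; a positive_rate gap points to a prediction-parity violation.
print(fairness_report(
    y_true=[1, 1, 0, 0, 1, 0],
    y_pred=[1, 0, 0, 1, 1, 0],
    group=["A", "A", "A", "B", "B", "B"],
))
```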
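And a minimal sketch of one possible "modification to optimization objectives" for the second objective, assuming a PyTorch binary classifier: the standard cross-entropy is augmented with a penalty on the gap between the mean predicted scores of two subgroups (a soft demographic-parity regularizer). This toy penalty is far simpler than the information-theoretic approach of (2); the function name and the weight `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fairness_penalized_loss(logits, targets, group, lam=0.1):
    """BCE loss plus |mean score gap| between two subgroups
    (a soft demographic-parity regularizer). Assumes both
    groups appear in the batch.
    logits: (N,) raw scores; targets, group: (N,) in {0, 1}."""
    bce = F.binary_cross_entropy_with_logits(logits, targets.float())
    probs = torch.sigmoid(logits)
    gap = probs[group == 0].mean() - probs[group == 1].mean()
    return bce + lam * gap.abs()

# Toy usage; in practice the logits come from the imaging model.
logits = torch.randn(8, requires_grad=True)
targets = torch.randint(0, 2, (8,))
group = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
loss = fairness_penalized_loss(logits, targets, group)
loss.backward()
```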
Skills Required:
- Strong knowledge of machine learning, particularly deep learning.
- Skills in data manipulation, statistical analysis, and programming (Python, with libraries such as TensorFlow, PyTorch, and scikit-learn).
- Interest in social responsibility and issues related to fairness and justice in automated systems.
Compensation: As an academic institution, we offer a standard stipend of approximately 650 euros per month.
Contact: Please send CV and transcripts to carole.frindel(at)creatis.insa-lyon.fr
References:
(1) Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
(2) Seo, S., Lee, J. Y., & Han, B. (2022, June). Information-theoretic bias reduction via causal view of spurious correlation. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 2, pp. 2180-2188).