Keywords: deep learning, multiple sclerosis, lesions, gadolinium, segmentation, detection, data
Medical Context and Objectives
Gadolinium (Gd) based contrast agents (GBCA) have been used for decades in clinical MRI examinations, especially to probe blood-brain barrier integrity (using Gd-enhanced T1-weighted MRI). Recently, Gd deposition within human bones and brain tissue was observed following repeated GBCA injections [8-9]. As the long-term toxicity of Gd accumulation in brain tissue remains unknown, the development of alternatives to Gd-enhanced MRI is crucial to limit patients' exposure to this potential toxicity. This point is especially important for multiple sclerosis (MS) patients, for whom Gd-enhanced MRI examination is regularly performed to monitor the disease course and is a critical asset for treatment adjustment [6-7].
Figure 1: A T1w image of an MS patient along with the segmented MS lesions.
Our main goal is to detect, on MRI images acquired without Gd injection, the MS lesions that would show blood-brain barrier disruption on post-Gd T1-weighted MRI. To this end, we will design a deep learning approach able to efficiently detect and segment active multiple sclerosis lesions from MR images acquired before gadolinium injection.
Deep learning for the detection and segmentation of inflammatory lesions without Gd
Our working hypothesis is that active MS lesions (i.e. Gd-enhancing lesions) can be distinguished by expert radiologists based on pre-injection conventional MRI only. Deep neural networks will be trained to detect and segment active MS lesions from these conventional images, without Gd injection. Two main questions will be tackled during the PhD.
Deep neural network for Gd response prediction
The challenge here is to train a sufficiently rich network that can predict the Gd response from preinjection images. The difficulty will be to train this network with good generalization properties from a limited amount of data. The problem of Gd response prediction can be seen either as an image translation problem (synthesize the post-Gd image from preinjection images) or as a pure detection or segmentation problem (predict the probability of Gd enhancement from preinjection images). We will also investigate dedicated data augmentation. We will combine pre-deep-learning methods (such as [4]) with deep learning approaches such as generative adversarial networks (GAN) [14, 15] or conditional variational autoencoders (cVAE) [16] to generate artificial annotated data that can be added to the training dataset. Multitask learning is also an axis of investigation: we can indeed combine the prediction of the Gd response with the segmentation of all the lesions readily visible on preinjection images. Mixing datasets and using a common network that solves several related tasks often benefits each of them.
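As a rough illustration of the multitask idea, the sketch below combines a Gd-enhancement prediction loss with a conventional lesion-segmentation loss computed from the outputs of a shared network. All function names, loss weights and toy data here are assumptions for illustration, not the project's actual design:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-6):
    """Voxelwise binary cross-entropy."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def multitask_loss(gd_prob, gd_mask, lesion_prob, lesion_mask,
                   w_gd=1.0, w_seg=0.5):
    """Weighted sum of the Gd-enhancement prediction loss and the
    visible-lesion segmentation loss (the weights are arbitrary)."""
    return (w_gd * (bce_loss(gd_prob, gd_mask) + dice_loss(gd_prob, gd_mask))
            + w_seg * dice_loss(lesion_prob, lesion_mask))

# Toy 3D patches: two probability maps from a shared backbone and their masks.
rng = np.random.default_rng(0)
gd_mask = (rng.random((8, 8, 8)) > 0.9).astype(float)
lesion_mask = (rng.random((8, 8, 8)) > 0.8).astype(float)
gd_prob = np.clip(gd_mask * 0.8 + 0.05 * rng.random((8, 8, 8)), 0, 1)
lesion_prob = np.clip(lesion_mask * 0.7 + 0.1 * rng.random((8, 8, 8)), 0, 1)
loss = multitask_loss(gd_prob, gd_mask, lesion_prob, lesion_mask)
print(float(loss))
```

Because both tasks share the same backbone, gradients from the (easier, better-annotated) visible-lesion segmentation task can regularize the features used for the harder Gd-response prediction.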
Understanding the deep neural network prediction
In this axis, we will tackle the following questions: 1/ Can we quantify the uncertainty of our predictions? 2/ Which (combinations of) preinjection modalities are important for a robust prediction of the Gd response? 3/ Can we reveal the patterns in preinjection images that predict the Gd response (positively or negatively)? For the first point, we will base our work on variational dropout [13], which allows the estimation of uncertainties on the output produced by a deep neural network. Work proposed on uncertainty estimation for MS lesion segmentation [12] will be extended to our problem. For questions 2 and 3, our investigations will be based on approaches such as those described in [10] or [11], which allow, from the output of a trained network on a specific image, the identification of the voxels that contribute to the classification of that image. The main difficulty will be to translate such a method from a classification network to a segmentation network.
Data
The OFSEP cohort, a large national clinical database comprising more than 1000 standardized clinical MR exams of MS patients, including T2/FLAIR MRI and pre- and post-Gd T1w MRI, will be used. Annotated data (without Gd injection) from public challenges (MICCAI 2008, ISBI 2015, MICCAI 2016) constitute useful complementary datasets.
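The dropout-based uncertainty estimation mentioned above (as in [13]) can be sketched as follows: keep dropout active at test time, run several stochastic forward passes, and use the voxelwise mean as the prediction and the voxelwise variance as the uncertainty map. The toy "network" (a voxelwise linear model), its weights and shapes are purely illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stochastic_forward(x, w, rng, p_drop=0.2):
    """One forward pass of a toy voxelwise model with dropout kept ON at
    test time: each input value is randomly zeroed (inverted dropout)."""
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return sigmoid(np.sum(x * mask * w, axis=0))  # per-voxel probability

rng = np.random.default_rng(42)
x = rng.normal(size=(4, 16, 16))   # 4 input modalities, one 2D slice of voxels
w = rng.normal(size=(4, 1, 1))     # toy per-modality weights

T = 50                             # number of stochastic forward passes
samples = np.stack([stochastic_forward(x, w, rng) for _ in range(T)])

mean_prob = samples.mean(axis=0)   # prediction: mean probability map
uncertainty = samples.var(axis=0)  # uncertainty: voxelwise variance
print(mean_prob.shape, uncertainty.shape)
```

In a real network the random masks would be applied to hidden layers rather than to the input, but the sampling scheme, and the interpretation of the variance map, are the same.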
This work will be done at CREATIS in Lyon (France), in collaboration between the "NMR and Optics" team, which has strong expertise in MRI acquisition and multiple sclerosis clinical studies, and the "Images et Modèles" team, which has strong expertise in medical image processing based on deep learning. The PhD student will be supervised by:
• F. Cotton, neuroradiologist, multiple sclerosis specialist, NMR and Optics team
• M. Sdika, deep learning and image processing specialist, Images et Modèles team
• T. Grenier, deep learning and image processing specialist, Images et Modèles team
Candidate & Application
The candidate is expected to have an M2 in machine learning, image processing or applied mathematics. We are seeking a serious candidate who can work semi-autonomously, with:
• strong programming skills, including experience with Python
• good knowledge of machine learning and deep learning
• knowledge of image processing methods (image segmentation, registration and warping)
• good writing ability
The successful candidate is expected to be autonomous and to show strong motivation and interest in multidisciplinary research. He/she will need to acquire a deep understanding of the questions and issues related to MS, especially the inflammatory lesion process. Interested candidates should send a cover letter, a CV, and transcripts of M1 and M2 to: michael[dot]sdika[at]creatis[dot]insa-lyon[dot]fr and francois[dot]cotton[at]chu-lyon[dot]fr. Other relevant documents, such as letters of reference, previous internship reports, or code samples, will be appreciated. Please note that the doctoral school requires an overall passing grade higher than or equal to 12/20 at the first session of Master 2 or an equivalent diploma.
For the recruited applicant, this PhD will open opportunities in academic research or as an expert in deep learning and medical imaging in industry.
Application should be sent before May 15th 2019.
References
1. Pierre-Antoine Ganaye, Michaël Sdika, and Hugues Benoit-Cattin. Semi-supervised learning for segmentation under semantic constraint. In MICCAI, Granada, Spain, September 2018.
2. Pierre-Antoine Ganaye, Michaël Sdika, and Hugues Benoit-Cattin. Towards Integrating Spatial Localization in Convolutional Neural Networks for Brain Image Segmentation. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, United States, April 2018. IEEE. doi: 10.1109/ISBI.2018.8363652.
3. Michaël Sdika. Enhancing atlas based segmentation with multiclass linear classifiers. Medical Physics, 42:7169, 2015. doi: 10.1118/1.4935946.
4. Michaël Sdika and Daniel Pelletier. Nonrigid registration of multiple sclerosis brain images using lesion inpainting for morphometry or lesion mapping. Human Brain Mapping, 30:1060–7, April 2009. doi: 10.1002/hbm.20566. URL https://hal.archives-ouvertes.fr/hal-01902507.
5. P.A. Gourraud, Michaël Sdika, P. Khankhanian, R.G. Henry, A. Beheshtian, P.M. Matthews, S.L. Hauser, J.R. Oksenberg, D. Pelletier, and S.E. Baranzini. A genome-wide association study of brain lesion distribution in multiple sclerosis. Brain, 136:1012–24, April 2013. doi: 10.1093/brain/aws363. URL https://hal.archives-ouvertes.fr/hal-01902499.
6. Sormani, M., Rio, J., Tintorè, M., Signori, A., Li, D., Cornelisse, P., Stubinski, B., Stromillo, M., Montalban, X., and De Stefano, N. 2013. Scoring treatment response in patients with relapsing multiple sclerosis. Mult Scler 19: 605–612. doi: 10.1177/1352458512460605.
7. Cotton, F., Kremer, S., Hannoun, S., Vukusic, S., and Dousset, V. 2015. OFSEP, a nationwide cohort of people with multiple sclerosis: Consensus minimal MRI protocol. Journal of Neuroradiology 42: 133–140. doi: 10.1016/j.neurad.2014.12.001.
8. Kanda, T., Ishii, K., Kawaguchi, H., Kitajima, K., and Takenaka, D. 2013. High Signal Intensity in the Dentate Nucleus and Globus Pallidus on Unenhanced T1-weighted MR Images: Relationship with Increasing Cumulative Dose of a Gadolinium-based Contrast Material. Radiology 270.
9. Layne, K.A., Dargan, P.I., Archer, J.R.H., and Wood, D.M. 2018. Gadolinium deposition and the
potential for toxicological sequelae - A literature review of issues surrounding gadolinium-based
contrast agents. Br J Clin Pharmacol 84: 2522–2534. doi:10.1111/bcp.13718.
10. S. Lapuschkin, S. Wäldchen, A. Binder, G. Montavon, W. Samek, and K.R. Müller. Unmasking Clever Hans Predictors and Assessing What Machines Really Learn. Nature Communications, 10:1096, 2019.
11. G Montavon, W Samek, KR Müller. Methods for Interpreting and Understanding Deep Neural
Networks. Digital Signal Processing, 73:1-15, 2018
12. Nair, Tanya, et al. ”Exploring uncertainty measures in deep networks for multiple sclerosis
lesion detection and segmentation.” International Conference on Medical Image Computing and
Computer-Assisted Intervention. Springer, Cham, 2018.
13. Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In: ICML, pp. 1050–1059 (2016).
14. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A.
and Bengio, Y., 2014. Generative adversarial nets. In Advances in neural information processing
systems (pp. 2672-2680).
15. Mirza, M. and Osindero, S., 2014. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
16. Kingma, D.P. and Welling, M., 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
17. Zhang, Y., Cheng, J.-Z., Xiang, L., Yap, P.-T., and Shen, D. 2018. Dual-Domain Cascaded
Regression for Synthesizing 7T from 3T MRI. Pages 410–417 in International Conference on
Medical Image Computing and Computer-Assisted Intervention. Springer.
18. Valverde, Sergi, et al. "Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach." NeuroImage 155 (2017): 159-168.