Organ detection in multi-modality medical images via deep domain adaptation
Recruitment status: Ongoing
Period: 2019
Contact: Razmig KECHICHIAN (razmig.kechichian@creatis.insa-lyon.fr), Maryam HAMMAMI (maryam.hammami@creatis.insa-lyon.fr)

Context and objectives

Organ detection and localization in medical images are important tasks, both in clinical procedures and as an intermediate step in image analysis algorithms such as segmentation. Multi-modality methods are of particular interest for robust organ detection in the heterogeneous datasets stored in the PACS systems of healthcare and medical research centers. Such datasets are often large and diverse in content, which makes efficient organ detection challenging.

We seek a fast multi-modality object detection method capable of localizing up to two dozen thoracic and abdominal organs in 3D radiological images (CT and MRI). Recent deep learning-based object detection methods [2-4] have proven very effective in the supervised setting, where hundreds of annotated training examples are available for each object class. In medical imaging, such large annotated datasets are rare and annotations are expensive; supervised deep learning methods, which estimate millions of network parameters, are therefore likely to fail.

Data augmentation techniques, both image transformation-based [8,12] and, more recently, GAN-based (generative adversarial network) [9-11], can help alleviate the lack of annotated data by generating additional examples similar to those in the available training sets. On the other hand, annotations are often more abundant for certain image modalities, such as contrast-enhanced CT. Organ detectors learned on these source images could be transferred, or adapted, to target images comprising similar anatomies, such as MRI, by domain adaptation methods [1]. Existing domain adaptive object detection methods, such as [5], often adapt a learned classification and detection model by fine-tuning deep network parameters. Recent adversarial approaches propose particularly interesting alternatives. In [7], for example, a convolutional neural network (CNN) detector learned on a source domain is adapted to the target domain through GAN-generated examples that resemble the target domain while carrying source labels, together with pseudo-labels in the target domain. In [6], a supervised CNN detector is extended with two adversarial pathways to tackle image-level and instance-level shift in the target domain.
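To give a concrete flavor of transformation-based augmentation for 3D volumes, the following is a minimal sketch in Python with NumPy. The function name and the particular transformations (random flips, axial 90° rotations, intensity jitter) are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def augment_volume(vol, rng):
    """Apply simple random transformations to a 3D volume.

    A minimal illustration of transformation-based data augmentation;
    the transformation set and parameter ranges are hypothetical.
    """
    out = vol.copy()
    # Random flip along each spatial axis
    for axis in range(3):
        if rng.random() < 0.5:
            out = np.flip(out, axis=axis)
    # Random 90-degree rotation in the axial plane (axes 1 and 2)
    k = int(rng.integers(0, 4))
    out = np.rot90(out, k=k, axes=(1, 2))
    # Random intensity scaling and shift, e.g. to mimic scanner variability
    scale = rng.uniform(0.9, 1.1)
    shift = rng.uniform(-0.05, 0.05)
    return out * scale + shift

rng = np.random.default_rng(0)
vol = rng.standard_normal((16, 32, 32)).astype(np.float32)
aug = augment_volume(vol, rng)
```

In practice such transformations would be applied on the fly during training, so that the detector never sees exactly the same example twice.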

The aim of this project is therefore to study and propose an efficient cross-modality organ detection method for medical images: one capable of adapting supervised detectors learned on a source modality to a target modality, possibly in an adversarial manner, and possibly using data augmentation to counter the lack of annotated data.


Required profile

We are looking for a motivated collaborator capable of critical thinking, able to work autonomously as well as in a team, with an interest in medical imaging and a good sense of responsibility (and humor ;). The candidate should be studying toward a master's degree in computer science or a related engineering field, and should have a solid background in applied mathematics, image processing and computer science, in addition to good programming skills, preferably in Python. A working knowledge of deep learning methods is necessary.


Application

We encourage interested candidates to send us their résumé accompanied by a cover letter and a transcript of recent grades.


Salary

The intern will be remunerated at the rate fixed by law: approximately €540/month.


Working language

Communication can take place in French or in English, depending on the proficiency of the intern.


References

  1. Wang, Mei, and Weihong Deng. "Deep Visual Domain Adaptation: A Survey." Neurocomputing (2018).
  2. Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." IEEE Transactions on Pattern Analysis and Machine Intelligence 39.6 (2017): 1137-1149.
  3. Liu, Wei, et al. "SSD: Single shot multibox detector." European conference on computer vision. Springer, Cham, 2016.
  4. Redmon, Joseph, and Ali Farhadi. "YOLO9000: better, faster, stronger." arXiv preprint (2017).
  5. Hoffman, Judy, et al. "LSDA: Large scale detection through adaptation." Advances in Neural Information Processing Systems. 2014.
  6. Chen, Yuhua, et al. "Domain Adaptive Faster R-CNN for Object Detection in the Wild." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
  7. Inoue, Naoto, et al. "Cross-Domain Weakly-Supervised Object Detection through Progressive Domain Adaptation." arXiv preprint arXiv:1803.11365 (2018).
  8. Çiçek, Özgün, et al. "3D U-Net: learning dense volumetric segmentation from sparse annotation." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2016.
  9. Frid-Adar, Maayan, et al. "GAN-based Synthetic Medical Image Augmentation for increased CNN Performance in Liver Lesion Classification." arXiv preprint arXiv:1803.01229 (2018).
  10. Antoniou, Antreas, Amos Storkey, and Harrison Edwards. "Data augmentation generative adversarial networks." arXiv preprint arXiv:1711.04340 (2017).
  11. Zhang, Xiaofeng, et al. "DADA: Deep Adversarial Data Augmentation for Extremely Low Data Regime Classification." arXiv preprint arXiv:1809.00981 (2018).
  12. Eaton-Rosen, Zach, et al. "Improving Data Augmentation for Medical Image Segmentation." (2018).