Artificial Intelligence for Multimodal Image Reconstruction
Recruitment status: ongoing
Period: 2021-2024
Contact: bruno.sixou@insa-lyon.fr, voichita.maxim@creatis.insa-lyon.fr

Two PhD positions are open in the ANR MultiRecon project, a collaboration between the Laboratoire de Traitement de l'Information Médicale in Rennes (LaTIM), the Centre de Recherche en Acquisition et Traitement de l'Image pour la Santé at INSA Lyon (CREATIS), the Service Hospitalier Frédéric Joliot in Orsay (SHFJ) and the Centre Hospitalier Universitaire in Poitiers.

The PhD students will be recruited respectively by CREATIS and LaTIM.

1. Scientific Context

Medical imaging is the technique of creating a visual representation of the anatomy or of the function of organs or tissues. The images are obtained by tomographic reconstruction, which is the task of estimating an image from measurement data collected by an imaging system. In computed tomography (CT), the image to reconstruct corresponds to the X-ray attenuation, which reflects the proportion of photons interacting with matter as they pass through the object. In positron emission tomography (PET), the system detects pairs of γ-photons indirectly emitted by a positron-emitting radiotracer delivered to the patient; the patient dose should be kept to a minimum. Other medical imaging techniques include, for example, magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT) and ultrasound.

The collected data can be affected by a poor signal-to-noise ratio (SNR), which translates into degraded image quality. The noise encompasses the random phenomena that occur during the acquisition. Shorter acquisitions are preferable, not only because of time constraints but also to limit the patient dose, yet they result in a lower SNR. The challenge of image reconstruction is therefore to reconstruct an image with acceptable noise from a short, low-dose acquisition.

2. Working hypothesis and aims 

Recent artificial intelligence (AI) techniques for reconstruction have pushed noise levels down: using a training dataset of existing images, (supervised) machine learning (ML) techniques can learn parameters and features that can be used for image reconstruction, with patch-based approaches [1], [2], synthesis filters [3] and analysis filters [4].
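A minimal illustration of the patch-based idea is dictionary learning: atoms learned from training patches sparsely represent new patches, and the sparse codes serve as a learned prior. The sketch below uses scikit-learn's `DictionaryLearning` on synthetic patches; the data, sizes and hyperparameters are illustrative and unrelated to the methods of [1]-[4].

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Synthetic "patches": sparse combinations of a few prototype signals,
# standing in for patches extracted from a training set of images.
n_features, n_atoms, n_samples = 16, 8, 200
true_atoms = rng.normal(size=(n_atoms, n_features))
codes = rng.normal(size=(n_samples, n_atoms)) * (rng.random((n_samples, n_atoms)) < 0.3)
X = codes @ true_atoms + 0.01 * rng.normal(size=(n_samples, n_features))

# Learn a dictionary whose atoms sparsely represent the patches;
# sparse coding uses orthogonal matching pursuit with 4 atoms per patch.
dico = DictionaryLearning(n_components=n_atoms, alpha=0.1, max_iter=20,
                          transform_n_nonzero_coefs=4, random_state=0)
sparse_codes = dico.fit_transform(X)

# Reconstruct the patches from their sparse codes.
X_hat = sparse_codes @ dico.components_
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```

In an actual reconstruction pipeline the learned dictionary would enter the data-fidelity optimization as a regularizer, rather than being applied to clean patches as here.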

    On the other hand, deep learning has recently achieved many breakthroughs in image processing problems such as classification, segmentation and super-resolution, and it has also been successfully applied to reconstruction problems [5]. Deep learning tries to learn a hierarchy of features through multiple stages of a learning process. Recently, deep learning techniques have been used to achieve complete image reconstruction from a raw-data training dataset [6]. These techniques offer the possibility of further reducing the patient dose and the acquisition time without degrading image quality.

    Learning-based methods are still very new and their use is mostly limited to single images. They could benefit from multimodal data by exploiting the information of the combined modalities [7]. Multimodal imaging plays an important role in accurately identifying diseased and normal tissues: CT images are generally used for treatment planning, MRI provides better soft-tissue definition, and PET images are useful for identifying disease at a metabolic level even before it is visible on CT or MRI. Multimodal machine learning (MML) aims at building models that can process and relate information from multiple modalities. Learning from multimodal sources offers the possibility of capturing joint information between modalities, thus allowing images to “talk to each other”.

    In this project we will develop new AI reconstruction techniques for multimodal imaging systems such as PET/CT and PET/MRI. The hypothesis is that combining the raw data from different modalities with machine learning and deep learning models can reduce the noise and improve the image quality. More specifically, the tasks are:

  • to develop new multidimensional convolutional dictionary learning models to jointly represent multimodal images;
  • to develop new optimization algorithms for convolutional dictionary learning and image reconstruction;
  • to extend these models to deep learning architectures;
  • to apply these techniques to PET/CT and PET/MRI.
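The core idea behind the first task can be sketched in a few lines: if paired patches from two modalities share one sparse code, stacking the per-modality dictionaries lets a single code explain both measurements at once. The dictionaries, sizes and the least-squares coding step below are toy illustrations, not the project's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coupled-dictionary idea: paired patches from two modalities share one
# sparse code, so stacking the dictionaries couples the representations.
n1, n2, n_atoms = 16, 12, 10          # feature sizes of the two modalities
D1 = rng.normal(size=(n1, n_atoms))   # dictionary for modality 1 (e.g. PET)
D2 = rng.normal(size=(n2, n_atoms))   # dictionary for modality 2 (e.g. CT)
z = np.zeros(n_atoms)
z[[1, 4]] = [1.0, -0.5]               # shared sparse code

y1, y2 = D1 @ z, D2 @ z               # paired noiseless patches
D = np.vstack([D1, D2])               # stacked (coupled) dictionary
y = np.concatenate([y1, y2])

# With the coupling, one least-squares code fits both modalities at once.
z_hat, *_ = np.linalg.lstsq(D, y, rcond=None)
print(np.allclose(z_hat, z))          # → True
```

The project's convolutional and deep extensions replace the explicit matrices with learned convolutional filters and network layers, but the coupling principle is the same.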

Two PhD subjects are proposed. The first one focuses on the methodological aspects of multimodal reconstruction; the PhD student will work in the CREATIS laboratory in Lyon. The second subject will address applications of the new approaches to the PET/CT and PET/MRI modalities; these applications will be investigated in the LaTIM laboratory.

3. Thesis supervision and collaboration

The PhD candidate recruited at CREATIS will be co-supervised by Bruno Sixou and Voichita Maxim (both MCF HDR, INSA, CREATIS). The PhD candidate recruited at LaTIM will be supervised by Alexandre Bousse (MCF HDR, University of Western Brittany, LaTIM). The methodological aspects of multimodal image reconstruction will be investigated in Lyon; fine-tuning to PET/CT and PET/MRI data will be done at LaTIM. The PhD students will work in close collaboration, and we expect a very stimulating environment to result. A post-doctoral position will be opened at SHFJ Orsay and will further contribute to the success of the project, whose aim is to produce code working on real data. These data are provided by the project partners SHFJ Orsay and CHU Poitiers.

4. Profile Required

We are looking for enthusiastic and autonomous students with strong motivation and interest in multidisciplinary research.

Education: Master in Applied or Pure Mathematics, Computer Science, Signal and Image Processing, or Biomedical Physics, or an engineering degree in a related field;

Scientific interests: computer science, compressed sensing, machine and deep learning, medical applications, applied mathematics;

Programming skills: Python highly recommended;

Languages: English required, French optional.

5. Skills developed by the successful candidates during the PhD project

The successful candidates will develop strong skills in machine and deep learning, medical image analysis, tomography, inverse problems and optimization. They will also develop general skills in modeling, programming, critical evaluation of results, and managing a complex project combining knowledge from different backgrounds. After the thesis they will be able to join research departments in both industry and academia.

6. How to apply?

For more details on the position, please contact bruno.sixou@insa-lyon.fr and voichita.maxim@creatis.insa-lyon.fr, attaching a CV, a cover letter and your Master's grades.

Deadline for application is 30th of April 2021.

References

[1] S. Ravishankar and Y. Bresler, “L0 sparsifying transform learning with efficient optimal updates and convergence guarantees,” IEEE Transactions on Signal Processing, vol. 63, no. 9, pp. 2389–2404, 2015.

[2] S. Ravishankar, R. R. Nadakuditi, and J. A. Fessler, “Efficient sum of outer products dictionary learning (SOUP-DIL) and its application to inverse problems,” IEEE Transactions on Computational Imaging, vol. 3, no. 4, pp. 694–709, 2017.

[3] I. Y. Chun and J. A. Fessler, “Convolutional dictionary learning: Acceleration and convergence,” IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 1697–1712, 2017.

[4] I. Y. Chun and J. A. Fessler, “Convolutional analysis operator learning: Acceleration and convergence,” IEEE Transactions on Image Processing (preprint), 2019.

[5] S. Arridge, P. Maass, O. Oktem, and C. Schoenlieb, “Solving inverse problems using data driven models,” Acta Numerica, pp. 1–174, 2019.

[6] Y. Li, K. Li, C. Zhang, J. Montoya, and G.-H. Chen, “Learning to reconstruct computed tomography (CT) images directly from sinogram data under a variety of data acquisition conditions,” IEEE Transactions on Medical Imaging, 2019.

[7] J. Huang, Z. Le, Y. Ma, Y. Fan, F. Zang, and L. Yang, “MGMDcGAN: Medical image fusion using multi-generator multi-discriminator conditional generative adversarial networks,” IEEE Access, pp. 55147–55157, 2020.