Artificial Intelligence for Multimodal Image Reconstruction

A PhD position is open in the ANR MultiRecon project, a collaboration between the Laboratoire de Traitement de l’Information Médicale/Brest (LaTIM), the Centre de Recherche en Acquisition et Traitement de l’Image pour la Santé/INSA Lyon (CREATIS), the Service Hospitalier Frédéric Joliot/Orsay (SHFJ) and the Centre Hospitalier Universitaire in Poitiers (CHU Poitiers).


1. Scientific Context

Medical imaging is the technique of creating a visual representation of the anatomy or of the function of organs and tissues. The images are obtained by tomographic reconstruction, that is, the task of estimating an image from measurement data collected by an imaging system. In computed tomography (CT), the image to reconstruct corresponds to the X-ray attenuation, which reflects the proportion of photons interacting with matter as they pass through the object. In positron emission tomography (PET), the system detects pairs of γ-photons indirectly emitted by a positron-emitting radiotracer delivered to the patient. The patient dose should be kept to a minimum level. Other medical imaging techniques include, for example, magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT), and ultrasound.
    The collected data can be affected by a poor signal-to-noise ratio (SNR), which translates into degraded image quality. The noise accounts for the random phenomena that occur during the acquisition. Shorter acquisitions are preferable, not only for time constraints but also to limit the patient dose, yet they result in a lower SNR. The challenge of image reconstruction is therefore to reconstruct an image with acceptable noise from a short or low-dose acquisition.
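
    To make the noise issue concrete, here is a minimal, self-contained sketch (not part of the project code) that simulates a low-dose CT acquisition of a software phantom and reconstructs it with filtered back-projection. The phantom, photon count and dose scaling are illustrative assumptions, and the scikit-image calls assume a recent version of the library (filter_name requires scikit-image >= 0.19).

    # Illustrative sketch: low-dose CT measurements and filtered back-projection.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    image = rescale(shepp_logan_phantom(), 0.5)            # ground-truth attenuation map
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles in degrees
    sinogram = radon(image, theta=theta)                   # noiseless line integrals

    # Hypothetical low-dose acquisition: scale the line integrals to a plausible
    # attenuation range and apply Poisson noise to the detected photon counts.
    scale = 4.0 / sinogram.max()
    I0 = 1e4                                               # illustrative incident photons per ray
    counts = np.random.default_rng(0).poisson(I0 * np.exp(-scale * sinogram))
    noisy_sinogram = -np.log(np.clip(counts, 1, None) / I0) / scale

    recon_full = iradon(sinogram, theta=theta, filter_name="ramp")
    recon_low = iradon(noisy_sinogram, theta=theta, filter_name="ramp")
    print("RMSE full dose:", np.sqrt(np.mean((recon_full - image) ** 2)))
    print("RMSE low dose :", np.sqrt(np.mean((recon_low - image) ** 2)))

    With a low incident photon count, the Poisson noise on the detected counts propagates into grain and streaks in the reconstructed image; this is precisely the degradation that the learning-based methods discussed below aim to compensate.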


2. Working hypothesis and aims 

Recent artificial intelligence (AI) techniques have pushed reconstruction towards lower noise: using a training dataset of existing images, (supervised) machine learning (ML) techniques can learn parameters and features that are then used for image reconstruction, with patch-based approaches [1], [2], synthesis filters [3] and analysis filters [4].
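
    As a rough illustration of what such patch-based learning looks like in practice, the sketch below learns a dictionary of 8x8 patches on a clean image and then denoises a degraded copy by sparse coding on that dictionary. It loosely follows the scikit-learn image-denoising example; the test image, patch size, noise level and regularization values are placeholders, and the sketch is not the method of [1], [2].

    # Illustrative sketch: patch-based dictionary learning and sparse-coding denoising.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
    from skimage.data import camera
    from skimage.transform import rescale
    from skimage.util import img_as_float

    clean = rescale(img_as_float(camera()), 0.25)                       # hypothetical training image
    noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(clean.shape)

    # Learn an overcomplete dictionary on 8x8 patches of the clean image.
    patches = extract_patches_2d(clean, (8, 8), max_patches=5000, random_state=0)
    patches = patches.reshape(patches.shape[0], -1)
    patches -= patches.mean(axis=1, keepdims=True)
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
    dictionary = dico.fit(patches).components_

    # Sparse-code the noisy patches on the learned dictionary and average them back.
    noisy_patches = extract_patches_2d(noisy, (8, 8)).reshape(-1, 64)
    means = noisy_patches.mean(axis=1, keepdims=True)
    codes = dico.transform(noisy_patches - means)
    denoised_patches = (codes @ dictionary + means).reshape(-1, 8, 8)
    denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)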

    On the other hand, deep learning has recently achieved many breakthroughs in image processing problems such as classification, segmentation and super-resolution, and it has also been successfully applied to reconstruction problems [5]. Deep learning tries to learn a hierarchy of features through multiple stages of a learning process. Recently, such learning techniques have been used to perform the complete image reconstruction from a training dataset of raw measurements [6]. These techniques offer the possibility to further reduce the patient dose and the acquisition time without degrading the image quality.
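
    To make the role of deep learning concrete, here is a minimal sketch (assuming PyTorch is available) of a small residual CNN used as a post-reconstruction denoiser, one of the simplest ways a trained network can enter the reconstruction pipeline [5]. The architecture, the random training pairs and the hyper-parameters are placeholders, not the models to be developed in this project.

    # Illustrative sketch: a residual CNN denoiser trained on (low-dose, reference) image pairs.
    import torch
    import torch.nn as nn

    class ResidualDenoiser(nn.Module):
        """Predicts the noise component of a reconstructed image and subtracts it."""
        def __init__(self, channels=1, features=32, depth=4):
            super().__init__()
            layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(features, channels, 3, padding=1)]
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            return x - self.net(x)            # residual learning: output = input - estimated noise

    model = ResidualDenoiser()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Placeholder training pairs standing in for low-dose reconstructions and full-dose references.
    noisy_batch = torch.randn(8, 1, 64, 64)
    reference_batch = torch.randn(8, 1, 64, 64)

    for step in range(10):                    # a real training loop would iterate over a dataset
        optimizer.zero_grad()
        loss = loss_fn(model(noisy_batch), reference_batch)
        loss.backward()
        optimizer.step()

    In practice the placeholder batches would be replaced by low-dose reconstructions paired with full-dose references, and more elaborate architectures (U-Nets, unrolled iterative schemes) are commonly used.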

    Learning-based methods are still very new and their use is mostly limited to single images. In particular, they could benefit from multimodal data by exploiting the information shared by the combined modalities [7]. Multimodal imaging plays an important role in accurately distinguishing diseased from normal tissues: CT images are generally used for treatment planning, MRI provides better soft-tissue definition, while PET images identify the disease at a metabolic level even before it is visible on CT or MRI. Multimodal machine learning (MML) aims at building models that can process and relate information from multiple modalities. Learning from multimodal sources offers the possibility of capturing joint information between modalities, thus allowing the images to “talk to each other”.

    In this project we will develop new AI reconstruction techniques for multimodal imaging systems such as PET/CT and PET/MRI. The hypothesis is that combining the raw data from the different modalities with machine-learning and deep-learning based models can reduce the noise and improve the image quality. More specifically, the tasks are the following (a toy convolutional sparse-coding step, the common building block of these models, is sketched after the list):

  • to develop new multidimensional convolutional dictionary learning models to jointly represent multimodal images;
  • to develop new optimization algorithms for convolutional dictionary learning and image reconstruction;
  • to extend these models to deep learning architectures;
  • to apply these techniques to PET/CT and PET/MRI.
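
    As announced above, the following toy sketch shows the convolutional sparse representation that underlies convolutional dictionary learning: a few ISTA iterations estimate coefficient maps z_k such that the sum of convolutions d_k * z_k approximates an image. The random unit-norm filters, the single-modality setting and the FFT-based step-size estimate are simplifying assumptions; in the project the filters would be learned and coupled across the modalities.

    # Illustrative sketch: convolutional sparse coding of a single image by ISTA.
    import numpy as np
    from scipy.signal import fftconvolve

    def ista_csc(image, filters, lam=0.05, n_iter=50):
        """Estimate sparse coefficient maps z_k such that sum_k d_k * z_k approximates `image`."""
        # Step size from an FFT estimate of the Lipschitz constant of the data term.
        spectra = np.stack([np.abs(np.fft.fft2(d, image.shape)) ** 2 for d in filters])
        step = 0.9 / spectra.sum(axis=0).max()
        maps = np.zeros((len(filters),) + image.shape)
        for _ in range(n_iter):
            residual = sum(fftconvolve(z, d, mode="same") for z, d in zip(maps, filters)) - image
            for k, d in enumerate(filters):
                grad = fftconvolve(residual, d[::-1, ::-1], mode="same")      # adjoint of convolution
                v = maps[k] - step * grad                                     # gradient step
                maps[k] = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0)  # soft-thresholding
        return maps

    rng = np.random.default_rng(0)
    filters = [rng.standard_normal((7, 7)) for _ in range(8)]   # in practice these are learned
    filters = [d / np.linalg.norm(d) for d in filters]          # unit-norm atoms
    image = rng.standard_normal((64, 64))                       # placeholder image
    coefficient_maps = ista_csc(image, filters)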

Two PhD subjects are proposed. The first one focuses on the methodological aspects of multimodal reconstruction; the corresponding PhD student will work in the CREATIS laboratory in Lyon. The second subject addresses the application of the new approaches to the PET/CT and PET/MRI modalities and will be investigated in the LaTIM laboratory.

3. Thesis supervision and collaboration

The PhD candidate recruited at CREATIS will be co-supervised by Bruno Sixou and Voichita Maxim (both MDC HDR, INSA Lyon, CREATIS). A second PhD candidate will be recruited at LaTIM and supervised by Alexandre Bousse (MDC HDR, University of Western Brittany, LaTIM). The methodological aspects of multimodal image reconstruction will be investigated in Lyon; the fine tuning to PET/CT and PET/MRI data will be done at LaTIM. The two PhD students will work in close collaboration, which we expect will result in a very stimulating environment. A post-doctoral position will be opened at SHFJ Orsay and will further contribute to the success of the project, whose aim is to produce code working on real data. These data are provided by the project partners SHFJ Orsay and CHU Poitiers.

4. Profile Required

We are looking for an enthusiastic and autonomous student with strong motivation and interest in multidisciplinary research.

• Education: Master's degree in Applied or Pure Mathematics, Computer Science, Signal and Image Processing, or Biomedical Physics, or an engineering degree in a related field;

• Scientific interests: computer science, compressed sensing, machine/deep learning, medical applications, applied mathematics;

• Programming skills: Python highly recommended;

• Languages: English required, French optional.

5. Skills developed by the successful candidate during the PhD project

The successful candidate will develop strong skills in machine and deep learning, medical image analysis, tomography, inverse problems and optimization. He/she will also develop general skills in modeling, programming, critical evaluation of results and the management of a complex project combining knowledge from different backgrounds. After the thesis, the PhD holder will be able to join research departments in both industry and academia.


6. How to apply?

For more details on the position or to apply, please contact bruno.sixou@insa-lyon.fr and voichita.maxim@creatis.insa-lyon.fr, enclosing a CV, a cover letter and your Master's grades.

Deadline for application is 30 June 2021.


References

[1] S. Ravishankar and Y. Bresler, “L0 sparsifying transform learning with efficient optimal updates and convergence guarantees,” IEEE Transactions on Signal Processing, vol. 63, no. 9, pp. 2389–2404, 2015.

[2] S. Ravishankar, R. R. Nadakuditi, and J. A. Fessler, “Efficient sum of outer products dictionary learning (SOUP-DIL) and its application to inverse problems,” IEEE Transactions on Computational Imaging, vol. 3, no. 4, pp. 694–709, 2017.

[3] I. Y. Chun and J. A. Fessler, “Convolutional dictionary learning: Acceleration and convergence,” IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 1697–1712, 2017.

[4] I. Y. Chun and J. A. Fessler, “Convolutional analysis operator learning: Acceleration and convergence,” IEEE Transactions on Image Processing (preprint), 2019.

[5] S. Arridge, P. Maass, O. Oktem, and C. Schoenlieb, “Solving inverse problems using data-driven models,” Acta Numerica, pp. 1–174, 2019.

[6] Y. Li, K. Li, C. Zhang, J. Montoya, and G.-H. Chen, “Learning to reconstruct computed tomography (CT) images directly from sinogram data under a variety of data acquisition conditions,” IEEE Transactions on Medical Imaging, 2019.

[7] J. Huang, Z. Le, Y. Ma, Y. Fan, F. Zang, and L. Yang, “MGMDcGAN: Medical image fusion using multi-generator multi-discriminator conditional generative adversarial networks,” IEEE Access, pp. 55147–55157, 2020.

Downloads

MultiRecon_Eng.pdf (192.58 KB)

Type

PhD thesis subject

Status

Past recruitment

Period

2021-2024

Contact

bruno.sixou@insa-lyon.fr
voichita.maxim@creatis.insa-lyon.fr
