Motion compensated reconstruction using deep learning for computational optics

The Camille Jordan Institute and the CREATIS laboratory announce the opening of a six-month internship position, starting in March 2023.  An ANR-funded PhD position will be opened in October 2023 to continue on the same topic.

Keywords. Inverse problem, image reconstruction, deep learning, unrolled methods, plug-and-play methods, optimization.

Skills. We are looking for an enthusiastic and autonomous candidate with a strong background in applied mathematics, image processing, or deep learning. The following skills will be acquired during the internship, although prior knowledge of these topics is appreciated:

  • Programming in Python, collaborative development (git and github)
  • Linear algebra and inverse problems (ill-posed problems, regularization)
  • Deep learning (neural network design and optimization, automatic differentiation)
  • Hyperspectral imaging

Detailed description of the internship. Download the PDF file here. Check also our website.

Medical background. Fluorescence-guided surgery (FGS) refers to surgical guidance in brain tumor resection, where fluorescence imaging has proven efficient for glioma resection, with improved recurrence-free survival rates. The technique consists of administering 5-aminolevulinic acid to the patient, a molecule that is absorbed by tumor cells and metabolized into protoporphyrin IX (PpIX). The PpIX fluorescence signal can be visualized using an intraoperative microscope equipped with a fluorescence module (excitation, 405 nm; emission, 630 nm). While initial studies showed that only high-grade glioma resection benefits from FGS, several recent studies have indicated that FGS is also of interest for low-grade gliomas, provided that the full spectral information is measured by point probes or multispectral cameras. While this work paves the way to a better determination of the tumor margin during surgery, the latter studies considered point measurements with an external measurement device. It would be highly desirable to perform hyperspectral measurements with the surgical microscope itself, providing the surgeon with real-time imaging rather than a few point measurements. However, high spectral resolution is needed to distinguish the two states of PpIX.

Preliminary results. In a previous project, we developed a high-spectral-resolution imager that acquires 64 x 64 x 2048 hypercubes in ~10 s. It has a spectral resolution of ~2 nm over a range of about 230 nm, which has been optimized to detect the PpIX fluorescence emission during fluorescence-guided surgery, and a typical spatial resolution of ~200 µm. Our acquisition device is computational; therefore, it requires a reconstruction algorithm to recover the hypercube from the raw data. In a series of works, we have proposed deep-learning reconstruction methods to solve this task.
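The computational-acquisition principle above can be sketched as a linear forward model: the raw data are projections of the (vectorized) hypercube, and reconstruction inverts that model. A minimal sketch with purely illustrative dimensions, where a random operator `H` stands in for the real device and the names `x_true`, `y`, `x_hat` are not from the project code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy computational-acquisition model: the device measures linear
# projections y = H x of the vectorized hypercube x, and reconstruction
# inverts this model. Dimensions are illustrative, not those of the
# actual 64 x 64 x 2048 system.
n = 32                              # number of unknowns
m = 48                              # number of measurements
H = rng.standard_normal((m, n))     # forward operator (assumed known)
x_true = rng.standard_normal(n)     # ground-truth scene
y = H @ x_true                      # noiseless raw measurements

# Simplest reconstruction: least squares. In the deep-learning methods
# cited above, a learned network replaces or refines this step.
x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
```

In the noiseless, over-determined case the least-squares solution recovers the scene exactly; the learned methods matter once noise, under-sampling, and speed constraints enter.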

Challenge. While our reconstruction algorithms allow fast reconstruction (e.g., a few hundred milliseconds), the acquisition of a single hypercube currently takes ~10 s. This may be sufficient for ex-vivo samples; however, in-vivo imaging is subject to physiological motion (e.g., heartbeat and breathing) that occurs at faster rates. Motion of the scene during acquisition creates blur artifacts in the recovered hypercube if not taken into account. Indeed, when the scene moves rapidly compared to the acquisition time, each measurement sees a different scene, while our current algorithms assume that the scene is motionless. This is an issue for our current acquisition device, whose acquisition time (e.g., 10 s) typically spans ten cardiac cycles (1 cycle/s).
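The blur mechanism described above can be illustrated with a toy 1-D example: when each measurement sees a shifted copy of the scene, inverting the static-scene model mixes those copies. All names and sizes below are illustrative, with a circular shift standing in for physiological motion:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D illustration of motion blur: each measurement pattern H[k]
# is applied to a shifted copy of the scene x, so a reconstruction
# that assumes a static scene mixes the shifted copies.
n = 16
x = np.zeros(n)
x[4] = 1.0                              # point-like scene
H = rng.standard_normal((n, n))         # one pattern per measurement

y_static = H @ x                        # motionless acquisition
y_moving = np.array([H[k] @ np.roll(x, k % 3) for k in range(n)])

# Inverting the static model on moving-scene data spreads the point
# over neighboring positions: the blur artifact described above.
x_blur = np.linalg.solve(H, y_moving)
```

On `y_static` the same inversion recovers `x` exactly; only the mismatch between the assumed and actual scene at each measurement causes the artifact.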

Work plan. In this internship, we will investigate strategies for the motion-compensated reconstruction of the scene. To do so, we will consider a hybrid acquisition device that also acquires monochrome/RGB images at higher frame rates (e.g., 24 fps). We will assume that the scene motion can be estimated from the monochrome/RGB video stream. In particular, we will consider a real-time motion-compensation method for brain neurosurgery. Assuming the scene motion is known at each frame, we will investigate efficient methods to reconstruct a motion-compensated hypercube. First, we will solve the problem with no prior knowledge about the solution. We will also consider modern strategies that combine traditional regularization and deep learning; among them, deep unrolled methods and plug-and-play methods will be investigated. Our algorithms will be evaluated on synthetic videos with a known motion model, on video datasets with known motion models, and on video datasets with no known motion model.
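The first step of the work plan (known motion, no prior) can be sketched as least squares on a warp-aware forward model: each measurement row is composed with its known warp before inversion. A toy sketch with illustrative sizes, using circular shifts as stand-in warp operators `W_k`:

```python
import numpy as np

rng = np.random.default_rng(2)

# Motion-compensated reconstruction with no prior: each measurement is
# y_k = h_k^T W_k x, with W_k a known warp (here, a circular shift).
# Sizes and operators are illustrative, not the real device model.
n, m = 16, 24
x_true = rng.standard_normal(n)
H = rng.standard_normal((m, n))          # measurement patterns h_k
shifts = [k % 3 for k in range(m)]       # known per-measurement motion

y = np.array([H[k] @ np.roll(x_true, shifts[k]) for k in range(m)])

# Motion-compensated operator A with rows h_k^T W_k: since
# h^T roll(x, s) = roll(h, -s)^T x, each row is the counter-shifted
# pattern. Least squares on A then recovers a deblurred x.
A = np.array([np.roll(H[k], -shifts[k]) for k in range(m)])
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

In the unrolled and plug-and-play variants mentioned above, this least-squares step would alternate with a learned regularization step rather than stand alone.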

Second, the monochrome/RGB acquisition arm provides information complementary to the spectral arm (i.e., high-spatial/low-spectral resolution vs. low-spatial/high-spectral resolution). Therefore, we will formalize this as a reconstruction problem where the hypercube satisfies not only the hyperspectral forward model but also the monochrome/RGB forward model, up to noise. This approach was found to be among the most memory-efficient for a similar problem, known as pansharpening in the remote-sensing literature.
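The joint formulation can be sketched as a stacked least-squares problem in which the hypercube must fit both forward models. The operators and sizes below are illustrative stand-ins, and `lam` is a hypothetical weight balancing the two data-fidelity terms:

```python
import numpy as np

rng = np.random.default_rng(3)

# Joint-fidelity sketch: x must explain both the spectral measurements
# (H_spec) and the monochrome/RGB measurements (H_rgb), as in the
# pansharpening-style formulation. All names and sizes are illustrative.
n = 20
x_true = rng.standard_normal(n)
H_spec = rng.standard_normal((12, n))   # low-spatial / high-spectral arm
H_rgb = rng.standard_normal((15, n))    # high-spatial / low-spectral arm
y_spec = H_spec @ x_true
y_rgb = H_rgb @ x_true

# Minimize ||H_spec x - y_spec||^2 + lam * ||H_rgb x - y_rgb||^2
# by stacking the two weighted systems into one least-squares problem.
lam = 1.0
A = np.vstack([H_spec, np.sqrt(lam) * H_rgb])
b = np.concatenate([y_spec, np.sqrt(lam) * y_rgb])
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Stacking keeps a single solve over one operator, which is one reason this style of formulation can be memory-efficient compared to alternating between the two arms.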

The successful candidate is expected to contribute to an in-house Python package for image reconstruction. They will work in close collaboration with researchers in biomedical imaging, mathematics, and biomedical optics, and will have access to real experimental data.

How to apply? Send CV, motivation letter, and academic records to michael.sdika@creatis.insa-lyon.fr, nicolas.ducros@creatis.insa-lyon.fr and elie.bretin@insa-lyon.fr.

Salary. ~€580 net monthly.
