
Anatomical priors for multi-organ segmentation of 3D medical images using Segment Anything Model (SAM) foundation models: applications to abdominal and dental segmentation

Theme: medical image segmentation, deep learning, anatomical modeling, frugal AI

Summary: The field of medical image segmentation is undergoing a paradigm shift from specialized methods designed for individual tasks to foundation models, the most popular of which is the Segment Anything Model (SAM), capable of addressing a multitude of segmentation scenarios. Initially designed for natural images, these models are attracting attention for their ability to generalize to objects of unknown classes in unknown domains when provided with prompts specifying the objects of interest, or when used to refine an initial segmentation. Prompts are typically specified by an operator, often as a box enclosing the target object. Moreover, SAM models can be used without task-specific training, which is very costly in data and computing resources, making them more economical.

Although prompt-generation algorithms have been proposed to automate the segmentation process, difficulties remain in applying these models to multi-organ segmentation of 3D medical images. Owing to the variability in the shape of anatomical structures and their spatial proximity, the prompt box for one structure often encompasses all or part of an adjacent, sometimes smaller, structure, making the latter difficult to segment. Taking domain knowledge into account, such as the spatial relationships between structures (adjacency, orientation, inclusion, etc.), would allow a hierarchical segmentation in which key structures are identified and segmented first, so that the bounding boxes of neighboring structures can be defined more precisely and those structures segmented in turn. Anatomical assumptions can also be incorporated when retraining the model, to make it more robust to imprecise prompts. Several strategies can be considered, including integrating anatomical constraints into the loss function, or into the reward function in a reinforcement-learning approach.
These strategies are also attractive from a frugality standpoint, since they require retraining only the model's decoder rather than the whole model. Moreover, such approaches have low training-data requirements, since valid distributions are no longer deduced from massive datasets but encoded in the optimized functions.
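As a concrete illustration of integrating an anatomical constraint into the loss function, the sketch below adds a soft non-overlap penalty between two structures that are anatomically disjoint. This is a minimal example under assumed conventions (probability maps as numpy arrays, Dice as the base segmentation loss, an invented weight `lam`), not the project's actual formulation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary target."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def exclusion_penalty(pred_a, pred_b):
    """Anatomical prior: structures A and B are disjoint, so their
    probability maps should never both be high at the same voxel."""
    return float((pred_a * pred_b).mean())

def constrained_loss(pred, target, pred_neighbor, lam=0.1):
    """Base segmentation loss plus a weighted anatomical-constraint term."""
    return dice_loss(pred, target) + lam * exclusion_penalty(pred, pred_neighbor)
```

Other spatial relations (inclusion, relative orientation) could be encoded as analogous differentiable penalties, or as reward terms in a reinforcement-learning setting.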

Tasks: This research internship continues a project begun during the 2024-25 academic year as an M1 research internship, which studied SAM models adapted to 3D medical image analysis and the automatic generation of prompts via an atlas-registration approach. Building on these findings, the candidate will survey the state of the art on introducing domain knowledge into deep-learning multi-organ segmentation methods; study the different strategies for incorporating anatomical a priori knowledge during inference with a SAM model or during its retraining; and evaluate the resulting approaches in at least two 3D multi-organ segmentation applications: thoracic-abdominal CT and MRI imaging, and dental CBCT imaging.
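The hierarchical prompting idea — segmenting a key structure first, then deriving a box prompt for a neighbor from a known spatial relation — can be sketched as follows. The spatial rule ("the neighbor lies directly inferior to the key structure"), the axis convention, and the `margin`/`depth` parameters are all illustrative assumptions, not part of the project specification:

```python
import numpy as np

def bounding_box(mask):
    """Axis-aligned bounding box of a binary 3D mask: (lower, upper) corners,
    upper exclusive."""
    coords = np.argwhere(mask)
    return coords.min(axis=0), coords.max(axis=0) + 1

def prompt_box_inferior(key_mask, margin=3, depth=20):
    """Derive a box prompt for a structure assumed to lie directly inferior
    (along axis 0) to an already-segmented key structure, within `depth`
    slices, padded laterally by `margin` voxels."""
    lo, hi = bounding_box(key_mask)
    z_start = hi[0]
    return (z_start, max(lo[1] - margin, 0), max(lo[2] - margin, 0),
            z_start + depth, hi[1] + margin, hi[2] + margin)
```

In practice such a box would be passed (suitably projected per slice, or to a 3D-adapted SAM variant) as the prompt for the neighboring structure, replacing an operator-drawn box.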

Expected results and deliverables: A SAM model enriched with domain knowledge achieving performance at least comparable to state-of-the-art specialized methods on at least two reference datasets, AMOS22 (thoracic-abdominal CT/MRI) and ToothFairy3 (dental CBCT); a conference paper; and publication of the code with its documentation.

Required profile: Master's degree (M1) holder or currently enrolled in a Master's program (M2) with a solid background in applied mathematics, image analysis, and computer science, as well as strong programming skills. Theoretical and practical knowledge of deep learning methods is required. Knowledge of medical imaging is a plus.

Application: Send your resume and recent transcripts by email.

Salary: internship stipend according to current French legislation, approximately €630/month

Type: Master's subject

Status: Recruitment in progress

Period: February-July 2026

Contact: Razmig KÉCHICHIAN (razmig.kechichian@creatis.insa-lyon.fr)
