From RAndosca at mednet.ucla.edu Mon Apr 14 17:58:50 2025
From: RAndosca at mednet.ucla.edu (Andosca, Ryan)
Date: Mon, 14 Apr 2025 15:58:50 +0000
Subject: [Rtk-users] Implementing a Motion-Compensated SART (MCSART) Reconstruction
Message-ID: 

Hi,

So, my experience with coding, especially in C, is extremely limited. I mostly work in MATLAB, and as such I've been primarily using the built-in console applications to perform reconstruction on CBCT datasets. The rtksart application works great and provides a beautiful reconstruction. My goal is to do effectively the same reconstruction, but tack motion compensation onto each iteration.

After delving into the algorithm implementation in rtksart as much as I am able, I wrote a MATLAB workflow that boils down to the following (a rough code sketch of this loop follows this message):

* Start with an image of zeros
* Randomize the gantry angle order
* Iterate over gantry angles:
  * Load the current projection's forward and backward motion compensation DVFs
  * Deform the current image iterate to the current gantry angle's breathing state
  * Forward project the deformed image
  * Subtract the deformed projections from the raw CBCT projection data
  * Back project the subtraction image
  * NORMALIZE THE BACK PROJECTION (this is where I think my issue is)
  * Deform the normalized back projection back to the original image space
  * Update the current image: img = img + lambda * (normalized_back_projection / number_of_gantry_angles)
    * Note: since this step also contains a normalization, it may also be a source of my issue...
  * Set negatives to 0
* Repeat

I get an image that is much as I expect - a much sharper diaphragm and generally higher definition in soft tissue structures. So I must be generally on the right track. The issue is that the normalization is clearly off in some way: I have a bright circle artifact in the middle of the image and some striping throughout. Try as I might, I couldn't discern how normalization is done in the rtksart application. So this is the method I used:

* Create an image of 1s the size of the final (SART reconstructed) image.
* Forward project through this image of 1s.
* Back project the resulting forward projection.
* In the normalization step of the MCSART process detailed above, simply divide by this resulting back projection.

Knowing all of that, any idea what might be leading to my artifacts? Ultimately, I know it would be more ideal to alter the rtksart application itself to include motion compensation, so that I'd be using the built-in normalization that clearly works, but I find my abilities lacking when I attempt to do this... So instead I am seeking to understand how that application implements normalization, so that I can use a similar normalization in my algorithm!

I greatly appreciate any help provided!

Best,
R. Andosca
GSR
UCLA Health

________________________________

UCLA HEALTH SCIENCES IMPORTANT WARNING: This email (and any attachments) is only intended for the use of the person or entity to which it is addressed, and may contain information that is privileged and confidential. You, the recipient, are obligated to maintain it in a safe, secure and confidential manner. Unauthorized redisclosure or failure to maintain confidentiality may subject you to federal and state penalties. If you are not the intended recipient, please immediately notify us by return email, and delete this message from your computer.
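[Editor's note: to make the loop described above concrete, here is a minimal Python/NumPy sketch of one MCSART pass, mirroring the listed steps. Everything in it is a placeholder: forward_project, back_project and warp are hypothetical callables standing in for whatever projectors and warping are actually used (RTK command-line applications in this thread), and the normalization shown is the one described in the message (dividing the back projection by a back projection of a forward projection of ones), which is precisely the step the thread goes on to discuss.]

import numpy as np

def mcsart_iteration(img, projections, angles, dvfs_fwd, dvfs_bwd,
                     forward_project, back_project, warp, lam=0.1):
    """One MCSART pass over all gantry angles (sketch only, placeholder operators).

    img          : current 3D volume estimate in the reference breathing state
    projections  : dict angle -> measured 2D projection
    dvfs_fwd/bwd : dict angle -> DVF to/from that projection's breathing state
    forward_project(vol, angle) -> 2D projection   (placeholder)
    back_project(proj, angle)   -> 3D volume       (placeholder)
    warp(vol, dvf)              -> deformed volume (placeholder)
    """
    n_angles = len(angles)
    ones = np.ones_like(img)
    order = list(angles)
    np.random.shuffle(order)                                  # randomized gantry angle order
    for angle in order:
        # deform the current estimate to this projection's breathing state
        img_state = warp(img, dvfs_fwd[angle])

        # residual in projection space: measured minus simulated
        residual = projections[angle] - forward_project(img_state, angle)

        # back project the residual
        bp = back_project(residual, angle)

        # normalize the back projection (the step in question): divide by the
        # back projection of a forward projection of a volume of ones
        norm_vol = back_project(forward_project(ones, angle), angle)
        bp = bp / np.maximum(norm_vol, 1e-6)                  # guard against division by zero

        # warp the normalized update back to the reference breathing state
        update = warp(bp, dvfs_bwd[angle])

        # relaxed additive update, then positivity constraint
        img = np.maximum(img + lam * update / n_angles, 0.0)
    return img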
From simon.rit at creatis.insa-lyon.fr Thu Apr 17 16:48:20 2025
From: simon.rit at creatis.insa-lyon.fr (Simon Rit)
Date: Thu, 17 Apr 2025 16:48:20 +0200
Subject: [Rtk-users] Implementing a Motion-Compensated SART (MCSART) Reconstruction
In-Reply-To: 
References: 
Message-ID: 

Hi Ryan,

If you check our code, the normalization step is kept quite simple to avoid a backprojection: we simply compute and divide by the length of the intersection between each ray and the "image box" (the parallelepiped corresponding to the space occupied by the volume), using the rtk::RayBoxIntersectionImageFilter. This is exactly what you should get if you forward project a volume of ones with our Joseph forward projectors. See the diagram in the SART implementation:
https://www.openrtk.org/Doxygen/classrtk_1_1SARTConeBeamReconstructionFilter.html

Maybe you can try to do the same here?

For what you want to do, it would be better to use forward and back projectors that combine the projection with a vector field... We have them available for the backprojection and for the forward projection; maybe we should try to implement them in the SART command line application for you... I have opened an issue to track this:
https://github.com/RTKConsortium/RTK/issues/721

Cheers,
Simon
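[Editor's note: the quantity Simon describes, the length of each ray's intersection with the reconstruction box, is easy to compute with the standard slab method. The NumPy sketch below is an illustration only, not RTK code; the function name and the example geometry are made up.]

import numpy as np

def ray_box_intersection_length(src, det_pt, box_min, box_max):
    """Length of the segment where the ray src -> det_pt crosses an axis-aligned box.

    src, det_pt, box_min, box_max : 3-element arrays (mm). Returns 0.0 on a miss.
    This mimics what rtk::RayBoxIntersectionImageFilter computes per detector pixel,
    i.e. the value a Joseph forward projection of a volume of ones gives for that ray.
    """
    src, det_pt = np.asarray(src, float), np.asarray(det_pt, float)
    d = det_pt - src
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (np.asarray(box_min, float) - src) / d
        t2 = (np.asarray(box_max, float) - src) / d
    t_near = np.nanmax(np.minimum(t1, t2))
    t_far = np.nanmin(np.maximum(t1, t2))
    if t_far <= t_near:
        return 0.0
    return (t_far - t_near) * np.linalg.norm(d)

# example: a ray crossing a 200 mm cube centred on the rotation axis
length = ray_box_intersection_length(src=[-1000, 0, 0], det_pt=[536, 0, 0],
                                     box_min=[-100, -100, -100], box_max=[100, 100, 100])
print(length)  # ~200.0, the full path through the cube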
From RAndosca at mednet.ucla.edu Thu Apr 17 17:55:04 2025
From: RAndosca at mednet.ucla.edu (Andosca, Ryan)
Date: Thu, 17 Apr 2025 15:55:04 +0000
Subject: [Rtk-users] Implementing a Motion-Compensated SART (MCSART) Reconstruction
In-Reply-To: 
References: 
Message-ID: 

Hi Simon,

Thank you so much for your reply! If I understand correctly, I just need to divide the forward projection step by a forward projection of ones (using your Joseph forward projector, which I do), and if I use voxel-wise back projection I needn't apply a secondary normalization to the back projection step? Is there a downside, maybe computationally, to this implementation? And if I do use Joseph back projection, do I normalize by dividing by a Joseph back projection of ones through the current projection's geometry? Apologies for the flurry of questions - I'm quite new to this and doing my best to learn as I go!

If you are able to implement a command line application that uses the warp projectors, that would be simply amazing for me! My Frankenstein implementation is a bit of a mess, after all.

I greatly appreciate all of your help!

Best,
Ryan
From RAndosca at mednet.ucla.edu Thu Apr 17 22:34:04 2025
From: RAndosca at mednet.ucla.edu (Andosca, Ryan)
Date: Thu, 17 Apr 2025 20:34:04 +0000
Subject: [Rtk-users] Implementing a Motion-Compensated SART (MCSART) Reconstruction
In-Reply-To: 
References: 
Message-ID: 

As an update, I swapped to the voxel-based back projection (from CudaRaycast) and normalized in just one step by dividing the forward projection by a forward projection of ones. The image quality did improve a little, but the main artifact of concern is still present. To describe it better, it's a bright cylinder near the center of the patient. I'm now wondering if this may be a geometry error of some kind rather than normalization... No such artifact exists when I use the rtksart application, which uses the same geometry file I use in my MCSART MATLAB implementation. But there are other areas where geometry artifacts might originate, like improperly updated file headers etc...

Since my coding is very clunky, my workflow involves a lot of saving out variables, sending them to RTK applications, then loading the results back into MATLAB for deformation. Not exactly streamlined, I know. But perhaps some header information isn't being passed to the RTK applications when it should be, or the header information is off slightly somewhere. I'll attach an image of the artifact in question to this email, though I'm not sure it will appear on the message board.

In the images, left is the rtksart output and right is my MCSART result (after one iteration). Also notice that the SART image appears sharper than MCSART - I can't be sure whether that's a function of noise or something else, but that difference may also speak to the issue for all I know... The diaphragm sure gets sharper though, so motion compensation is at least doing what it should be doing.

[attachments: two image.png screenshots - rtksart output (left) and MCSART result after one iteration (right)]
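[Editor's note: one cheap way to test the header-drift hypothesis raised above is to diff the geometry-relevant fields of the MetaImage (.mhd) headers before and after each round trip through the RTK applications. A minimal sketch follows; the file names are placeholders.]

# Compare the MetaImage header fields that control geometry between two files.
FIELDS = ("NDims", "DimSize", "ElementSpacing", "Offset", "TransformMatrix", "ElementType")

def read_mhd_header(path):
    """Parse the plain-text 'Key = Value' lines of an .mhd header into a dict."""
    header = {}
    with open(path, "r") as f:
        for line in f:
            if "=" in line:
                key, value = line.split("=", 1)
                header[key.strip()] = value.strip()
    return header

a = read_mhd_header("volume_from_matlab.mhd")   # placeholder file name
b = read_mhd_header("volume_from_rtk.mhd")      # placeholder file name
for field in FIELDS:
    if a.get(field) != b.get(field):
        print(f"{field}: {a.get(field)!r} vs {b.get(field)!r}")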
From simon.rit at creatis.insa-lyon.fr Fri Apr 18 09:10:19 2025
From: simon.rit at creatis.insa-lyon.fr (Simon Rit)
Date: Fri, 18 Apr 2025 09:10:19 +0200
Subject: [Rtk-users] Implementing a Motion-Compensated SART (MCSART) Reconstruction
In-Reply-To: 
References: 
Message-ID: 

Hi,

Yes, everything you said in your first email is correct. But you can replace the "forward projection of ones (using your Joseph forward projector)" by the rtk::RayBoxIntersectionImageFilter as implemented in RTK to make things faster.

The additional blurring in your images is most likely due to the intermediate interpolation steps for warping.

Are your projection images truncated? If so, you may want to add the offset detector image filter, as in the application, to avoid the artifact you mention:
https://github.com/RTKConsortium/RTK/blob/master/include/rtkSARTConeBeamReconstructionFilter.hxx#L44
This has been described in this paper:
https://doi.org/10.1088/0031-9155/58/2/205

Simon
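[Editor's note: for readers unfamiliar with the offset (displaced) detector correction: with a laterally shifted detector, the region near the rotation axis is covered by both a ray and its opposing ray over a full rotation, while the periphery is covered only once. An unweighted update then over-counts the doubly covered band, which produces exactly the kind of bright central cylinder described above. One common remedy is a smooth 0-to-1 weight across the overlap band (after Wang 2002). The sketch below illustrates that idea under a flat approximation; it is not the exact weighting used in RTK or in the cited paper.]

import numpy as np

def displaced_detector_weights(u, overlap_half_width):
    """Smooth 0-to-1 weights over the lateral band that a shifted detector sees twice.

    u                  : signed lateral detector coordinate(s) relative to the
                         projection of the rotation axis (mm), placeholder convention
    overlap_half_width : half-width of the doubly covered band (mm)

    One common choice (after Wang 2002). RTK's filter accounts for the actual
    cone-beam geometry, so treat this flat approximation as an illustration only.
    """
    u = np.asarray(u, dtype=float)
    d = float(overlap_half_width)
    return np.where(u <= -d, 0.0,
           np.where(u >= d, 1.0,
                    0.5 * (np.sin(np.pi * u / (2.0 * d)) + 1.0)))

# Usage idea: multiply each projection column by its weight before backprojection so
# that a ray measured twice over the scan (directly and by the opposing view) ends up
# with a total weight of one, removing the bright central cylinder.
u = np.linspace(-200.0, 200.0, 5)            # lateral positions of detector columns (mm)
print(displaced_detector_weights(u, 40.0))   # [0.  0.  0.5 1.  1. ]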
From RAndosca at mednet.ucla.edu Wed Apr 23 23:08:15 2025
From: RAndosca at mednet.ucla.edu (Andosca, Ryan)
Date: Wed, 23 Apr 2025 21:08:15 +0000
Subject: [Rtk-users] Implementing a Motion-Compensated SART (MCSART) Reconstruction
In-Reply-To: 
References: 
Message-ID: 

Hello,

That worked! I implemented a version of the offset detector filter based on the paper and that eliminated the artifact. So I now have a working version of MCSART, which is exciting! Big thanks for that!

My next step is to try to speed it up - right now, a two-iteration MCSART run with my MATLAB implementation takes about 8 hours, which is pretty steep for what I need to use it for. A lot of time is currently wasted on I/O because I'm using the command line applications to perform operations one projection at a time, so my thought is to at least use the Python bindings to eliminate that aspect of it. The ideal solution would be to write my own filter that applies the deformation before forward projection and after back projection and is otherwise identical to the SART filter, but my talents in C are nonexistent, as I've mentioned, so that may be beyond me.

In any case, I appreciate all of your help, and I'll keep an eye on the issue that you linked in your first message!

Best,
Ryan
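[Editor's note: for the Python-bindings route mentioned above, here is a rough sketch of a single forward projection driven from the RTK Python package (pip install itk-rtk) with no intermediate files. This is a sketch only: the file names, detector size, spacing and origin are placeholders, and the projector's input ordering is my understanding of the RTK convention (input 0 = projection stack template, input 1 = volume), so verify it against the RTK Python examples before relying on it.]

import itk
from itk import RTK as rtk

ImageType = itk.Image[itk.F, 3]

# geometry produced once, e.g. by rtksimulatedgeometry or exported from the scanner
geometry_reader = rtk.ThreeDCircularProjectionGeometryXMLFileReader.New()
geometry_reader.SetFilename("geometry.xml")          # placeholder path
geometry_reader.GenerateOutputInformation()
geometry = geometry_reader.GetOutputObject()

# current (warped) volume estimate, written by the MATLAB side or kept in memory
volume = itk.imread("warped_volume.mha", itk.F)      # placeholder path

# empty projection stack defining the detector size/spacing for this angle
proj_template = rtk.ConstantImageSource[ImageType].New()
proj_template.SetSize([512, 384, 1])                 # placeholder detector size
proj_template.SetSpacing([0.8, 0.8, 1.0])            # placeholder spacing (mm)
proj_template.SetOrigin([-204.4, -153.2, 0.0])       # placeholder origin (mm)
proj_template.SetConstant(0.0)

forward = rtk.JosephForwardProjectionImageFilter[ImageType, ImageType].New()
forward.SetGeometry(geometry)
forward.SetInput(0, proj_template.GetOutput())       # assumed input convention, see note
forward.SetInput(1, volume)
forward.Update()

# keep the result in memory as a NumPy array instead of writing it to disk
projection = itk.array_view_from_image(forward.GetOutput())
print(projection.shape)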
But perhaps some header information isn't being sent to the RTK applications that should be at some point, or the header information is off slightly somewhere. I'll attach an image of the artifact in question to this email, though I'm not sure it will appear on the message board. In the images, left is the rtksart output and right is my MCSART result (after one iteration). Also notice that the SART image appears sharper than MCSART - I can't be sure if that's a function of noise or what, but that difference may also speak to the issue for all I know... That diaphragm sure gets sharper though, so motion compensation is at least doing what it should be doing. [cid:ii_19647b65af2cb971f162] [cid:ii_19647b65af2cb971f161] ________________________________ From: Andosca, Ryan > Sent: Thursday, April 17, 2025 8:55 AM To: Simon Rit > Cc: rtk-users at openrtk.org > Subject: Re: [Rtk-users] Implementing a Motion-Compensated SART (MCSART) Reconstruction Hi Simon, Thank you so much for your reply! If I understand correctly, I just need to divide the forward projection step by a forward projection of ones (using your Joseph forward projector, which I do), and if I use voxel-wise back projection I needn't apply a secondary normalization to the back projection step? Is there a downside, maybe computationally, to this implementation? And if I do use Joseph back projection, do I normalize by dividing by a Joseph back projection of ones through the current projection's geometry? Apologies for the flurry of questions, I'm quite new to this and doing my best to learn as I go! If you are able to implement a command line application that utilizes the warp projectors, that would be simply amazing for me! My Frankenstein implementation is a bit of a mess, after all. I greatly appreciate all of your help! Best, Ryan ________________________________ From: Simon Rit > Sent: Thursday, April 17, 2025 7:48 AM To: Andosca, Ryan > Cc: rtk-users at openrtk.org > Subject: Re: [Rtk-users] Implementing a Motion-Compensated SART (MCSART) Reconstruction Caution: External sender from outside our organization. Proceed with caution with regard to links and attachments. Report Suspicious Hi Ryan, If you check our code, the normalization step is quite simplistic to avoid a backprojection: we simply calculate and divide by the length of the intersection between each ray and the "image box" (parallelepiped corresponding to the space occupied by the volume) using the rtk::RayBoxIntersectionImageFilter. This is exactly what you should get if you forward project a volume of 1 with our Joseph forward projectors. See the diagram from the implementation of SART: https://www.openrtk.org/Doxygen/classrtk_1_1SARTConeBeamReconstructionFilter.html Maybe you can try to do the same here? For what you want to do, it would be better to use a forward and backprojector combining the projection and a vector field... We have it available for the backprojection and for the forward, maybe we should try to implement it in the SART command line application for you... I have opened an issue to track this: https://github.com/RTKConsortium/RTK/issues/721 Cheers, Simon On Mon, Apr 14, 2025 at 6:25?PM Andosca, Ryan > wrote: Hi, So, my experience with coding, especially in C, is extremely limited. I mostly work in MATLAB, and as such I've been primarily using the built-in console applications to perform reconstruction on CBCT datasets. The rtksart application works great and provides a beautiful reconstruction. 
My goal is to effectively do the same reconstruction, but tack on motion compensation to each iteration. After delving into the algorithm implementation in rtksart as much as I am able, I wrote a MATLAB workflow that boils down to: * Start with image of zeros * Randomize gantry angle order * Iterate over gantry angles: * Load current projection forward and backward motion compensation DVFs * Deform current image iteration to current gantry angle breathing state * Forward project deformed image * Subtract deformed projections from raw CBCT image data * Back project the subtraction image * NORMALIZE BACK PROJECTION (this is where I think my issue is) * Deform normalized back projection back to the original image space * Update current image: img = img + lambda * (normalized_back_projection / number_of_gantry_angles) * Note: This step, as it also contains normalization, may also be a source of my issue... * Set negatives to 0 * Repeat I get an image that is much as I expect - a much sharper diaphragm and generally higher definition in soft tissue structures. So I must be generally on the right track. The issue is that the normalization is clearly off in some way. I have a bright circle artifact in the middle of the image and some striping throughout. Try as I might to understand it, I couldn't discern how normalization was done in the rtksart application. So, this is the method I used: * Create an image of 1s of the size of the final (SART reconstructed) image. * Forward project through this image of 1s * Back project through the resulting forward projection * In the normalization step in the MCSART process detailed above, I simply divide by this resulting back projection. Knowing all of that, any idea what my issue might be leading to my artifacts? Ultimately, I know it would be more ideal if I just altered the rtksart application itself to include motion compensation, and then I'd just be using the built-in normalization that clearly works, but I find my abilities lacking when I attempt to do this... So instead I am just seeking to understand how that application implements normalization, so that I can use a similar normalization in my algorithm! I greatly appreciate any help provided! Best, R. Andosca GSR UCLA Health ________________________________ UCLA HEALTH SCIENCES IMPORTANT WARNING: This email (and any attachments) is only intended for the use of the person or entity to which it is addressed, and may contain information that is privileged and confidential. You, the recipient, are obligated to maintain it in a safe, secure and confidential manner. Unauthorized redisclosure or failure to maintain confidentiality may subject you to federal and state penalties. If you are not the intended recipient, please immediately notify us by return email, and delete this message from your computer. _______________________________________________ Rtk-users mailing list rtk-users at openrtk.org https://www.creatis.insa-lyon.fr/mailman/listinfo/rtk-users -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 286605 bytes Desc: image.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From simon.rit at creatis.insa-lyon.fr Thu Apr 24 14:59:15 2025
From: simon.rit at creatis.insa-lyon.fr (Simon Rit)
Date: Thu, 24 Apr 2025 14:59:15 +0200
Subject: [Rtk-users] Implementing a Motion-Compensated SART (MCSART) Reconstruction
In-Reply-To: 
References: 
Message-ID: 

Thanks for sharing, good to hear. We'll try to implement these forward and back projectors; that should not be hard.

If I/O is really the issue, I think there is a way to use the ITK Python packages from MATLAB. I can dig in my archives if it's useful.

Simon