[Rtk-users] RTK

Robert Calließ robert.calliess at gmx.de
Fri Jan 30 10:44:04 CET 2015


Sorry, here it is again, this time to the mailing list. I just signed up.

Hello Simon,

thank you for the quick reply.

“Joseph's method samples the ray with one pixel per slice in the main
direction but it does not compute the intersection of the ray with each
voxel. Siddon's method <http://www.ncbi.nlm.nih.gov/pubmed/4000088> does
that. In fig 2 of [Xu and Mueller, 2006]
<http://www3.cs.stonybrook.edu/~mueller/papers/ISBI_06_quality_2.pdf>,
Joseph is referred to as "slice interpolated" and Siddon as
"box-line-integrated".”

OK, thanks for that hint. I think it may also have the same problem with
divergent rays and missing voxels during the reconstruction. Do you have a
link to a paper or source code for this algorithm? So the algorithm might
work as follows:

- calculate the intersection of the ray with the physical bounding volume
  (entry and exit points)

- from the entry point, determine the 4 voxels that surround this entry
  point, bilinearly interpolate the value at this position and add it to
  the sum

- go to the next plane (the plane that is most "perpendicular" to the
  current center ray (focus to detector center))?

- at the end, the sum is normalized by the ray length?
  (length(exit point - entry point))

Is that right?
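
To check that I understood it correctly, here is a rough, untested sketch
of the per-ray loop I have in mind (plain C++, no RTK types; Volume,
interp2 and traceRayJoseph are just placeholder names; I assume an
isotropic grid, positions given in voxel coordinates and z as the main
direction):

#include <cmath>
#include <cstdio>
#include <vector>

struct Volume {
  int nx, ny, nz;          // voxels per axis
  double spacing;          // isotropic voxel size in mm
  std::vector<float> data; // nx*ny*nz values, x fastest
  float at(int x, int y, int z) const { return data[(z * ny + y) * nx + x]; }
};

// Bilinear interpolation inside slice z at continuous position (x, y).
float interp2(const Volume& v, double x, double y, int z) {
  int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
  if (x0 < 0 || y0 < 0 || x0 + 1 >= v.nx || y0 + 1 >= v.ny) return 0.f;
  double fx = x - x0, fy = y - y0;
  return (float)((1 - fx) * (1 - fy) * v.at(x0, y0, z)
               + fx * (1 - fy) * v.at(x0 + 1, y0, z)
               + (1 - fx) * fy * v.at(x0, y0 + 1, z)
               + fx * fy * v.at(x0 + 1, y0 + 1, z));
}

// Source s and unit direction d in voxel coordinates, z = main direction.
double traceRayJoseph(const Volume& v, const double s[3], const double d[3]) {
  double sum = 0.0;
  // Path length along the ray per slice step in z (the 1/|cos| factor),
  // so no extra normalization by the total ray length would be needed.
  const double stepLen = v.spacing / std::abs(d[2]);
  for (int z = 0; z < v.nz; ++z) {
    double t = (z - s[2]) / d[2];   // ray parameter at the center of slice z
    if (t < 0) continue;            // slice lies behind the source
    sum += interp2(v, s[0] + t * d[0], s[1] + t * d[1], z);
  }
  return sum * stepLen;             // approximate line integral
}

int main() {
  Volume v{64, 64, 64, 1.0, std::vector<float>(64 * 64 * 64, 1.0f)};
  const double s[3] = {32.0, 32.0, -500.0}, d[3] = {0.0, 0.0, 1.0};
  std::printf("line integral ~ %f\n", traceRayJoseph(v, s, d)); // ~64 mm
  return 0;
}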

 

How can Joseph's method be used for back projection?

 

“This sounds very interesting, don't hesitate to share the code and/or the
publication! BTW, what is DDA?”

DDA stands for digital differential analyzer. I use this approach for a
voxel-based forward projector. When I started my project I was looking for
a fast and simple voxel-based forward projector. I found an article about
ray tracing and acceleration structures where the authors used this 3D DDA.
They actually needed it to traverse the bounding volume hierarchy to find
out which geometry a ray intersects. I slightly modified it to get the
intersection length of a ray within a voxel by subtracting the current and
the previous step distances. I have attached a zip file that contains this
modification and the original source code. The original source code is from
www.scratchapixel.com and there is also an article about this topic
(http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-12-introduction-to-acceleration-structures/what-else/).
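
For reference, a condensed, untested sketch of the idea (not the attached
code itself): an Amanatides-Woo style traversal over an axis-aligned grid
of unit voxels, with the direction vector normalized so that the ray
parameter equals distance.

#include <algorithm>
#include <cmath>
#include <cstdio>

// Walk a ray through an axis-aligned grid of unit voxels; the intersection
// length inside each voxel is the difference between the ray parameter at
// the current and the previous voxel boundary. The ray is assumed already
// clipped to the grid, i.e. [tEntry, tExit] is the part inside the volume.
void traverse(const double org[3], const double dir[3], const int dims[3],
              double tEntry, double tExit) {
  int voxel[3], step[3];
  double tMax[3], tDelta[3];
  for (int a = 0; a < 3; ++a) {
    double p = org[a] + tEntry * dir[a];            // entry position, axis a
    voxel[a] = std::min(dims[a] - 1, std::max(0, (int)std::floor(p)));
    step[a] = (dir[a] >= 0.0) ? 1 : -1;
    double nextBoundary = voxel[a] + (step[a] > 0 ? 1 : 0);
    tMax[a]   = (dir[a] != 0.0) ? (nextBoundary - org[a]) / dir[a] : 1e30;
    tDelta[a] = (dir[a] != 0.0) ? step[a] / dir[a] : 1e30;
  }
  double tPrev = tEntry;                      // parameter at previous boundary
  while (tPrev < tExit) {
    // axis whose voxel boundary is crossed next
    int a = (tMax[0] < tMax[1]) ? (tMax[0] < tMax[2] ? 0 : 2)
                                : (tMax[1] < tMax[2] ? 1 : 2);
    double tCur = std::min(tMax[a], tExit);
    std::printf("voxel (%d,%d,%d): intersection length %f\n",
                voxel[0], voxel[1], voxel[2], tCur - tPrev);
    tPrev = tCur;
    voxel[a] += step[a];
    if (voxel[a] < 0 || voxel[a] >= dims[a]) break;  // left the grid
    tMax[a] += tDelta[a];
  }
}

int main() {
  const int dims[3] = {4, 4, 4};
  const double org[3] = {-1.0, 0.5, 0.5}, dir[3] = {1.0, 0.0, 0.0};
  traverse(org, dir, dims, 1.0, 5.0); // 4 voxels, intersection length 1 each
  return 0;
}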

If you have a look at my code you will maybe miss some matrix calculation
stuff etc. I use a scene-based approach where I place the focus, the object
and the detector in a so-called record. Each record represents the scene
geometry at the time the corresponding projection image was taken. Before
the reconstruction starts I calculate all these positions. I thought this
could be a good way to decouple the actual reconstruction algorithm from
the scene geometry.
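
Roughly, each record holds something like this (just a sketch of the idea;
the names are placeholders, not the code in the archive):

#include <vector>

struct Vec3 { double x, y, z; };

// One record per projection image: the complete scene geometry at the time
// the image was taken. The reconstruction only iterates over these records
// and never looks at how the geometry was generated.
struct Record {
  double time;               // acquisition time of this projection
  Vec3 focus;                // x-ray source position
  Vec3 detectorCenter;       // detector position
  Vec3 detectorU, detectorV; // detector in-plane axes (pixel directions)
  Vec3 objectCenter;         // object position
  Vec3 rotationAxis;         // rotation axis at this instant
};

using Scene = std::vector<Record>; // one entry per projection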

 

 

“But generally we use matching resolution between pixels and voxels so the
problem is minimal.”

Do you mean that you use a volume resolution that matches the current
geometry setting and the detector's pixel resolution?
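
For example something like this (only my guess at what you mean; the names
and numbers are made up, they are not RTK parameters):

#include <cstdio>

int main() {
  const double sdd = 1500.0;     // source-to-detector distance in mm
  const double sid = 1000.0;     // source-to-isocenter distance in mm
  const double pixelSize = 0.3;  // detector pixel size in mm
  // detector pixel size divided by the magnification sdd/sid,
  // i.e. the pixel size scaled back to the isocenter
  const double voxelSize = pixelSize * sid / sdd;
  std::printf("voxel size ~ %.3f mm\n", voxelSize); // ~0.2 mm
  return 0;
}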

 

“Thanks for the last trick, I am aware of it (Riddell published it calling
this  <http://dx.doi.org/10.1109/TMI.2006.876169> "Rectification"), I'm not
sure that would change the computation time by a large factor but I should
check. I think you then need an additional interpolation to resample the
"moved" object, no?”

It is not quite the same; what I mean seems to be simpler. Let's assume the
detector is tilted around the x-axis by 0.25 degrees, the object's center
is at (0,0,0) and the rotation axis is (0,1,0). In FDK you usually have a
ray from the source to the voxel center and then you calculate the
intersection of this ray with the detector plane. To avoid the ray-plane
intersection calculation, we can rotate the whole system by 0.25 degrees,
so that the detector's normal is now parallel to the z-axis. Of course the
rotation axis is then no longer (0,1,0) and the focus (x-ray source) is
also rotated a bit.
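
As a tiny illustration of what I mean (untested, made-up numbers, no
particular library):

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Rotation about the x-axis by angleDeg degrees.
Vec3 rotateX(const Vec3& v, double angleDeg) {
  const double kPi = 3.14159265358979323846;
  double a = angleDeg * kPi / 180.0, c = std::cos(a), s = std::sin(a);
  return { v.x, c * v.y - s * v.z, s * v.y + c * v.z };
}

int main() {
  const double tiltDeg = 0.25;                             // detector tilt around x
  Vec3 focus        = { 0.0, 0.0, -1000.0 };               // x-ray source
  Vec3 detectorNorm = rotateX({ 0.0, 0.0, 1.0 }, tiltDeg); // tilted detector normal
  Vec3 rotAxis      = { 0.0, 1.0, 0.0 };

  // Rotate the whole system back by the tilt: the detector normal becomes
  // parallel to the z-axis again, while the source and the rotation axis
  // pick up the inverse rotation.
  Vec3 focusR   = rotateX(focus, -tiltDeg);
  Vec3 normalR  = rotateX(detectorNorm, -tiltDeg);         // == (0,0,1) up to rounding
  Vec3 rotAxisR = rotateX(rotAxis, -tiltDeg);              // no longer exactly (0,1,0)

  std::printf("focus (%f, %f, %f), normal (%f, %f, %f), axis (%f, %f, %f)\n",
              focusR.x, focusR.y, focusR.z, normalR.x, normalR.y, normalR.z,
              rotAxisR.x, rotAxisR.y, rotAxisR.z);
  return 0;
}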

 

“For the backprojection, we typically used voxel-based backprojection using
the center of the voxel which is faster than what you (seem to) use”

I think we mixed something up here. This type of back projection is used
for FDK. All my questions were related to S-ART. I need to calculate the
weights of a voxel for the back projection. To speed this up, I project the
voxel's vertices onto the detector plane, compute the MEB, and then send
rays from within this MEB through the voxel; the intersection lengths of
those rays give me my weights.
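
A rough, untested sketch of that footprint step (a plain central projection
onto a detector plane z = detZ; all names and numbers are only
illustrative):

#include <algorithm>
#include <cstdio>

struct Vec3 { double x, y, z; };

int main() {
  const Vec3 source = { 0.0, 0.0, -1000.0 };  // x-ray focus
  const double detZ = 500.0;                  // detector plane z = detZ
  const Vec3 c = { 10.0, 20.0, 0.0 };         // voxel center
  const double h = 0.5;                       // half voxel size

  // Project the 8 voxel corners through the source onto the detector plane
  // and take their 2D bounding box; rays are then cast from inside it.
  double minU = 1e30, maxU = -1e30, minV = 1e30, maxV = -1e30;
  for (int i = 0; i < 8; ++i) {
    Vec3 corner = { c.x + ((i & 1) ? h : -h),
                    c.y + ((i & 2) ? h : -h),
                    c.z + ((i & 4) ? h : -h) };
    double t = (detZ - source.z) / (corner.z - source.z); // ray/plane parameter
    double u = source.x + t * (corner.x - source.x);
    double v = source.y + t * (corner.y - source.y);
    minU = std::min(minU, u); maxU = std::max(maxU, u);
    minV = std::min(minV, v); maxV = std::max(maxV, v);
  }
  std::printf("detector footprint: u in [%f, %f], v in [%f, %f]\n",
              minU, maxU, minV, maxV);
  return 0;
}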

 

 

Best regards,

Robert

 

 

P.S. Hello to all the other people here on the mailing list.

 

 

From: simon.rit at gmail.com [mailto:simon.rit at gmail.com] On behalf of Simon
Rit
Sent: Thursday, 29 January 2015 20:58
To: Robert Calließ
Cc: rtk-users at openrtk.org
Subject: Re: RTK

 

Hi,
Thank you for your interest in RTK. Please use the mailing list for
questions that are of interest to anyone using RTK.

There are many ways to model the direct problem (forward projection).
Without going into too many details (available in the publications of each
method):
- "As far as I understand the goal of this approach is to calculate the
intersection length of a ray through a voxel, right ?" False. Joseph's
method samples the ray with one pixel per slice in the main direction but it
does not compute the intersection of the ray with each voxel. Siddon's
method <http://www.ncbi.nlm.nih.gov/pubmed/4000088>  does that. In fig 2 of
[Xu and Mueller, 2006]
<http://www3.cs.stonybrook.edu/~mueller/papers/ISBI_06_quality_2.pdf> ,
Joseph is referred to as "slice interpolated" and Siddon as
"box-line-integrated".
- "I can calculate the intersection length of the ray within a voxel by a
simple substraction, this runs very fast." This sounds very interesting,
don't hesitate to share the code and/or the publication! BTW, what is DDA?

- Small voxels / pixels, "Did you find a way to handle this?" We don't
handle this in RTK except if you consider that spatial regularisation (e.g.,
total variation) will overcome this problem in a way. But generally we use
matching resolution between pixels and voxels so the problem is minimal. For
the backprojection, we typically used voxel-based backprojection using the
center of the voxel which is faster than what you (seem to) use. I think
that if these things are a problem for you, there is a nice solution called
distance driven (back-)projection <http://stacks.iop.org/PMB/49/2463>  (by
De Man and Basu). I think it will do exactly what you want. I haven't
implemented it in RTK (yet).

Thanks for the last trick, I am aware of it (Riddell published it calling
this  <http://dx.doi.org/10.1109/TMI.2006.876169> "Rectification"), I'm not
sure that would change the computation time by a large factor but I should
check. I think you then need an additional interpolation to resample the
"moved" object, no?

I hope this helps. Let me know if something is not clear in my answer!
Cheers,
Simon

 

 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: voxelfp.7z
Type: application/octet-stream
Size: 102414 bytes
Desc: not available
URL: <http://www.creatis.insa-lyon.fr/pipermail/rtk-users/attachments/20150130/b3973dfd/attachment.obj>


More information about the Rtk-users mailing list