[Rtk-users] Use existing Images when copying from/to GPU

Simon Rit simon.rit at creatis.insa-lyon.fr
Mon Jul 8 23:13:05 CEST 2019


On Mon, Jul 8, 2019 at 10:45 PM C S <clem.schmid at gmail.com> wrote:

> Hi Simon,
>
> thank you for your swift reply and suggestions!
>
> In fact I'm already using your snippet
> <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70> for
> cpu->gpu transfer. My main issue is using the existing cpu image when
> transferring back to cpu, which I have not been able to do. I can use
> the itk.ImageDuplicator for getting the data into RAM, but I haven't found a
> way to point the itk.ImageDuplicator's *output* to an existing Image. It
> always creates a new Image and allocates new memory AFAIK.
>
I'm not sure I understand, but have you tried grafting the Image or
CudaImage onto an existing itk::Image (the Graft function)?


>
> Once I can do that, the next optimization step would be to also use an
> existing CudaImage (with a matching buffer) when transferring cpu->gpu, *contrary
> to your snippet.* For that I have found no way at all so far.
>
Again, I'm not sure I understand, but you should be able to graft a
CudaImage onto another CudaImage.


> To clarify, I do not want to perform my own CUDA computations. Using RTK's
> CUDA forward/backprojectors is the main feature I want to use from RTK.
> By *explicit* I mean doing the transfer myself instead of relying on
> RTK's implicit mechanism shown in the source code
> <https://github.com/SimonRit/RTK/blob/master/utilities/ITKCudaCommon/include/itkCudaImage.h#L32>
> .
>
You can always request explicit transfers by calling the functions of the data
manager (accessible via CudaImage::GetCudaDataManager()).
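A rough sketch of how that could look from Python, assuming an RTK build with CUDA wrapping. The data-manager methods named below come from ITKCudaCommon's itkCudaDataManager; whether each is wrapped in Python may depend on the build:

```python
def explicit_transfers(cuda_image):
    """Force host/device transfers instead of relying on implicit updates.

    Sketch only: UpdateGPUBuffer/UpdateCPUBuffer are methods of
    ITKCudaCommon's CudaDataManager; their Python availability may
    depend on the wrapping of your RTK build.
    """
    mgr = cuda_image.GetCudaDataManager()
    mgr.UpdateGPUBuffer()   # copy host -> device now, if the GPU side is stale
    # ... run CUDA filters here ...
    mgr.UpdateCPUBuffer()   # copy device -> host now, if the CPU side is stale
```

The manager tracks dirty flags for each side, so each Update call is a no-op when the corresponding buffer is already current.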


>
>
> Best
> Clemens
>
> Am Mo., 8. Juli 2019 um 16:20 Uhr schrieb Simon Rit <
> simon.rit at creatis.insa-lyon.fr>:
>
>> Hi,
>> Conversion from Image to CudaImage is not optimal. The way I'm doing it
>> now is shown in an example in these few lines
>> <https://github.com/SimonRit/RTK/blob/master/examples/FirstReconstruction/FirstCudaReconstruction.py#L64-L70>.
>> I am aware of the problem and discussed it on the ITK forum
>> <https://discourse.itk.org/t/shadowed-functions-in-gpuimage-or-cudaimage/1614>
>> but I don't have a better solution yet.
>> I'm not sure what you mean by explicitly transferring data from/to GPU
>> but I guess you can always work with itk::Image and do your own CUDA
>> computations in the GenerateData of the ImageFilter if you don't like the
>> CudaImage mechanism.
>> I hope this helps,
>> Simon
>>
>> On Mon, Jul 8, 2019 at 10:06 PM C S <clem.schmid at gmail.com> wrote:
>>
>>> Dear RTK users,
>>>
>>> I'm looking for a way to use existing ITK Images (either on GPU or in
>>> RAM) when transferring data from/to GPU. That is, not only re-using the
>>> Image object, but writing into the memory where its buffer is.
>>>
>>> Why: As I'm using the Python bindings, I guess this ties in with ITK
>>> wrapping the CudaImage type. In
>>> https://github.com/SimonRit/RTK/blob/master/utilities/ITKCudaCommon/include/itkCudaImage.h#L32 I
>>> read that the memory management is done implicitly and the CudaImage can be
>>> used with CPU filters. However when using the bindings,
>>> only rtk.BackProjectionImageFilter can be used with CudaImages. The other
>>> filters complain about not being wrapped for that type.
>>>
>>> That is why I want to explicitly transfer the data from/to GPU, but
>>> preferably using the existing Images and buffers. I can't rely on RTK
>>> managing GPU memory implicitly.
>>>
>>>
>>> Thank you very much for your help!
>>> Clemens
>>>
>>>
>>> _______________________________________________
>>> Rtk-users mailing list
>>> Rtk-users at public.kitware.com
>>> https://public.kitware.com/mailman/listinfo/rtk-users
>>>
>>

