GPU Optimization of Convolution for Large 3-D Real Images


Authors

KARAS Pavel, SVOBODA David, ZEMČÍK Pavel

Year of publication 2012
Type Article in Proceedings
Conference Proceedings of the International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS’12)
MU Faculty or unit

Faculty of Informatics

Citation
Web http://dx.doi.org/10.1007/978-3-642-33140-4_6
DOI 10.1007/978-3-642-33140-4_6
Field Informatics
Keywords gpu; convolution; 3-D; image processing
Description In this paper, we propose a method for computing the convolution of large 3-D real images. The convolution is performed in the frequency domain using the convolution theorem. Due to the properties of real signals, the algorithm can be optimized so that both time and memory consumption are halved when compared to complex signals of the same size. The convolution is decomposed in the frequency domain using the decimation-in-frequency (DIF) algorithm. The algorithm is accelerated on graphics hardware by means of the CUDA parallel computing model, achieving up to a 10x speedup with a single GPU over an optimized implementation on a quad-core CPU.
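The sketch below illustrates the general idea behind such a frequency-domain convolution of real 3-D volumes on the GPU: real-to-complex (R2C) FFTs store only nx*ny*(nz/2+1) spectral coefficients, roughly halving memory compared to complex transforms, and the convolution theorem reduces convolution to a point-wise multiplication of spectra. This is a minimal CUDA/cuFFT example under assumed volume sizes and padding, not the authors' actual implementation, which additionally decomposes the problem via the DIF algorithm.

```cuda
// Minimal sketch: FFT-based convolution of two real 3-D volumes with cuFFT.
// R2C transforms keep only nx*ny*(nz/2+1) complex coefficients.
// Sizes, test data, and the lack of error checking are illustrative assumptions.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>
#include <cufft.h>

// Point-wise complex multiplication in the frequency domain (convolution theorem),
// scaled by 1/N to compensate for cuFFT's unnormalized transforms.
__global__ void pointwiseMulScale(cufftComplex *a, const cufftComplex *b,
                                  size_t n, float scale)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        cufftComplex x = a[i], y = b[i];
        a[i].x = (x.x * y.x - x.y * y.y) * scale;
        a[i].y = (x.x * y.y + x.y * y.x) * scale;
    }
}

int main()
{
    // Volume dimensions, assumed already zero-padded for the convolution.
    const int nx = 128, ny = 128, nz = 128;
    const size_t nReal    = (size_t)nx * ny * nz;
    const size_t nComplex = (size_t)nx * ny * (nz / 2 + 1); // R2C halves last dim

    // Host input: image and kernel, both real-valued (delta impulses as test data).
    float *hImage  = (float *)calloc(nReal, sizeof(float));
    float *hKernel = (float *)calloc(nReal, sizeof(float));
    hImage[0] = 1.0f;
    hKernel[0] = 1.0f;

    float *dImage, *dKernel;
    cufftComplex *dImageF, *dKernelF;
    cudaMalloc(&dImage,   nReal    * sizeof(float));
    cudaMalloc(&dKernel,  nReal    * sizeof(float));
    cudaMalloc(&dImageF,  nComplex * sizeof(cufftComplex));
    cudaMalloc(&dKernelF, nComplex * sizeof(cufftComplex));
    cudaMemcpy(dImage,  hImage,  nReal * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dKernel, hKernel, nReal * sizeof(float), cudaMemcpyHostToDevice);

    // Forward real-to-complex FFTs of image and kernel.
    cufftHandle planR2C, planC2R;
    cufftPlan3d(&planR2C, nx, ny, nz, CUFFT_R2C);
    cufftPlan3d(&planC2R, nx, ny, nz, CUFFT_C2R);
    cufftExecR2C(planR2C, dImage,  dImageF);
    cufftExecR2C(planR2C, dKernel, dKernelF);

    // Convolution theorem: multiply spectra point-wise, then transform back.
    int threads = 256;
    int blocks  = (int)((nComplex + threads - 1) / threads);
    pointwiseMulScale<<<blocks, threads>>>(dImageF, dKernelF, nComplex,
                                           1.0f / (float)nReal);
    cufftExecC2R(planC2R, dImageF, dImage);   // real result overwrites dImage

    cudaMemcpy(hImage, dImage, nReal * sizeof(float), cudaMemcpyDeviceToHost);
    printf("result[0] = %f\n", hImage[0]);    // expected ~1.0 for the delta inputs

    cufftDestroy(planR2C); cufftDestroy(planC2R);
    cudaFree(dImage); cudaFree(dKernel); cudaFree(dImageF); cudaFree(dKernelF);
    free(hImage); free(hKernel);
    return 0;
}
```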
Related projects:
