A Hardware Architecture for Scale-space Extrema Detection

This is a Master's thesis from KTH, School of Information and Communication Technology (ICT)

Abstract: Vision-based object recognition and localization have been studied widely in recent years. Often the initial step in such tasks is the detection of interest points in a grey-level image. The current state-of-the-art algorithms in this domain, like the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), suffer from low execution speeds on GPU (graphics processing unit) based systems. Generally, the performance of these algorithms on a GPU falls below real-time due to their high computational complexity and data-intensive nature, which also results in elevated power consumption. Since real-time performance is desirable in many vision-based applications, hardware-based feature detection is an emerging solution that exploits the inherent parallelism of such algorithms to achieve significant speed gains. Efficient utilization of resources remains a challenge that directly affects the cost of the hardware. This work proposes a novel hardware architecture for the scale-space extrema detection part of the SIFT algorithm. The implementation of the proposed architecture for a Xilinx Virtex-4 FPGA and its evaluation are also presented. The implementation is sufficiently generic and can be adapted efficiently to different design parameters according to the requirements of the application. The achieved system performance exceeds the real-time requirement of 30 frames per second on 640 x 480 images. Synthesis results show efficient resource utilization compared with existing known implementations.
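For orientation, the sketch below illustrates in plain Python what the scale-space extrema detection step of SIFT computes: a difference-of-Gaussians (DoG) stack is built from progressively blurred copies of the image, and a pixel is kept as a candidate keypoint if it is the maximum or minimum of its 3x3x3 neighbourhood across adjacent scales. This is only an illustrative software reference, not the thesis's FPGA architecture; the sigma values, contrast threshold, and function names are arbitrary assumptions made for the example.

# Illustrative software sketch of SIFT scale-space extrema detection,
# the step the proposed hardware architecture accelerates. Sigmas and
# the contrast threshold below are arbitrary example values.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(image, sigmas):
    """Blur the image at each sigma and return differences of adjacent levels."""
    blurred = [gaussian_filter(image.astype(np.float32), s) for s in sigmas]
    return np.stack([blurred[i + 1] - blurred[i] for i in range(len(blurred) - 1)])

def scale_space_extrema(dog, threshold=0.03):
    """Return (scale, y, x) of pixels that are extrema of their 3x3x3 neighbourhood."""
    points = []
    scales, rows, cols = dog.shape
    for s in range(1, scales - 1):
        for y in range(1, rows - 1):
            for x in range(1, cols - 1):
                value = dog[s, y, x]
                if abs(value) < threshold:
                    continue  # reject low-contrast candidates
                cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                if value >= cube.max() or value <= cube.min():
                    points.append((s, y, x))
    return points

if __name__ == "__main__":
    img = np.random.rand(64, 64)                       # stand-in for a grey-level frame
    dog = dog_stack(img, sigmas=[1.0, 1.6, 2.56, 4.1])
    print(len(scale_space_extrema(dog)), "candidate keypoints")

The triple nested loop makes the data-parallel structure explicit: each candidate test touches only a small fixed window of the DoG stack, which is exactly the kind of locality that a pipelined FPGA datapath can exploit.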
