A Comparison of Performance Between a CPU and a GPU on Prime Factorization Using Eratosthene's Sieve and Trial Division

This is a bachelor's thesis from KTH/School of Computer Science and Communication (CSC)

Authors: Caroline W. Borg; Erik Dackebro; [2017]

Abstract: There has been remarkable advancement in multi-core processing units over the past decade. GPUs, which were originally designed as specialized graphics processors, are today used in a wide variety of other areas. Their ability to solve parallel problems is unmatched thanks to their massive number of simultaneously running cores. Despite this, most algorithms in use today are still fully sequential and do not utilize the processing power available. The Sieve of Eratosthenes and Trial Division are two very naive algorithms which, combined, can be used to find a number's unique combination of prime factors. This paper sought to compare the performance of a CPU and a GPU when tasked with prime factorization on seven different data sets. Five different programs were created: two running both algorithms on the CPU, two running both algorithms on the GPU, and one utilizing both. Each data set was presented to each program multiple times, in sizes ranging from one to half a million. The result was uniform in that the CPU greatly outperformed the GPU in every test case for this specific implementation.
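
The abstract names the two algorithms but does not show the thesis implementation. As a rough sketch (not the authors' code), a sequential CPU version could combine them as follows: the sieve generates every prime up to a chosen bound, and trial division against those primes decomposes each input number into its prime factors. The bound and the example input below are assumptions chosen only for illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Sieve of Eratosthenes: collect all primes up to 'limit'.
std::vector<uint64_t> sieve_primes(uint64_t limit) {
    std::vector<bool> is_prime(limit + 1, true);
    std::vector<uint64_t> primes;
    for (uint64_t p = 2; p <= limit; ++p) {
        if (!is_prime[p]) continue;
        primes.push_back(p);
        for (uint64_t m = p * p; m <= limit; m += p)
            is_prime[m] = false;
    }
    return primes;
}

// Trial division against the sieved primes; correct for n <= limit * limit,
// since any composite of that size has a prime factor <= limit.
std::vector<uint64_t> factorize(uint64_t n, const std::vector<uint64_t>& primes) {
    std::vector<uint64_t> factors;
    for (uint64_t p : primes) {
        if (p * p > n) break;
        while (n % p == 0) {
            factors.push_back(p);
            n /= p;
        }
    }
    if (n > 1) factors.push_back(n);  // whatever remains is itself prime
    return factors;
}

int main() {
    const uint64_t limit = 1000000;               // sieve bound, assumed for illustration
    const auto primes = sieve_primes(limit);
    for (uint64_t f : factorize(123456, primes))  // 123456 = 2^6 * 3 * 643
        std::cout << f << ' ';
    std::cout << '\n';
}
```

On a GPU the same idea is typically parallelized by assigning input numbers or candidate divisors to separate threads; the thesis compares such CPU and GPU variants, but the exact kernels are not given in this abstract.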
