Software Architecture for Concurrency Analysis on a Virtual Platform

This is a Master's thesis from KTH / School of Electrical Engineering and Computer Science (EECS)

Author: Brinda Mohan; [2022]


Abstract: With the increasing demand for faster processing, embedded systems are clearly moving towards multi-core and multiprocessor environments. In this context, concurrent programming is also on the rise. Bugs unique to concurrent programs, such as data races and deadlocks, have been known to cause unexplained and sometimes catastrophic errors in deployed programs. Several algorithms that detect potential concurrency problems have been developed to avoid these errors. Tools such as ThreadSanitizer, developed by Google, have been used to detect concurrency bugs in certain types of programs. The same principles can also be applied to simulation environments such as virtual platforms, which are used in place of hardware platforms to run and test software. Implementing such a concurrency bug detector on a virtual platform can be useful, as it enhances the capabilities of the platform and enables early bug detection, which could lead to cost savings. One such tool, known as SVPracer, has been developed for Ericsson's virtual platform; it detects concurrency bugs via dynamic analysis. It implements a happens-before algorithm that tracks accesses to shared variables and whether they are protected by mutexes and semaphores. SVPracer was previously tested only on small test programs. This thesis aims to scale up the use of the tool by running it on production software and to improve the algorithm. The thesis also proposes and describes a software architecture that can be used when adding dynamic-analysis-based concurrency problem detection to a virtual platform. The algorithm improvements were implemented by adding hardware semaphore awareness and locksets, and the architecture of the race detector was outlined. Some production software tests produced very large data race reports with many false positives, while others responded better. One source of false positives in the algorithm was identified: a discrepancy between the memory addresses written by the software and those checked by the algorithm. Other race reports remain to be analyzed.
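
To illustrate the lockset idea mentioned above, the sketch below shows a minimal Eraser-style lockset check in C++. The names used here (LocksetChecker, on_lock, on_access, and so on) are illustrative assumptions and do not reflect SVPracer's actual interface; the point is only that each shared address keeps the intersection of the lock sets held at every access, and an empty intersection flags a potential race.

    // Hypothetical sketch of an Eraser-style lockset check; names are illustrative,
    // not SVPracer's actual API.
    #include <cstdint>
    #include <set>
    #include <unordered_map>
    #include <algorithm>
    #include <iterator>
    #include <iostream>

    using LockId = int;
    using LockSet = std::set<LockId>;

    struct SharedState {
        bool initialized = false;
        LockSet candidate_locks;   // locks that have protected every access so far
    };

    class LocksetChecker {
    public:
        void on_lock(int thread, LockId lock)   { held_[thread].insert(lock); }
        void on_unlock(int thread, LockId lock) { held_[thread].erase(lock); }

        // Called on every read/write to a shared address.
        void on_access(int thread, uint64_t addr) {
            auto& state = shadow_[addr];
            const LockSet& held = held_[thread];
            if (!state.initialized) {
                // First access: assume all currently held locks protect this address.
                state.candidate_locks = held;
                state.initialized = true;
                return;
            }
            // Refine the candidate set to the locks held at every access so far.
            LockSet intersection;
            std::set_intersection(state.candidate_locks.begin(), state.candidate_locks.end(),
                                  held.begin(), held.end(),
                                  std::inserter(intersection, intersection.begin()));
            state.candidate_locks = std::move(intersection);
            if (state.candidate_locks.empty())
                std::cout << "potential race on 0x" << std::hex << addr << std::dec
                          << " (thread " << thread << ")\n";
        }

    private:
        std::unordered_map<int, LockSet> held_;            // per-thread held locks
        std::unordered_map<uint64_t, SharedState> shadow_; // per-address shadow state
    };

    int main() {
        LocksetChecker chk;
        chk.on_lock(1, 7); chk.on_access(1, 0x1000); chk.on_unlock(1, 7);  // protected access
        chk.on_access(2, 0x1000);  // unprotected access from another thread -> report
    }

A production detector such as SVPracer combines this kind of lockset refinement with happens-before tracking to reduce false positives from accesses that are ordered by other synchronization, such as the hardware semaphores discussed in the thesis.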
