= HELIX-UP: Relaxing Program Semantics to Unleash Parallelization =
Simone Campanoni, Glenn Holloway, Gu-Yeon Wei, David Brooks <br>
''Proc. International Symposium on Code Generation and Optimization (CGO), February 2015''
Automatic generation of parallel code for general-purpose commodity processors is a challenging computational problem.
Nevertheless, there is a lot of latent thread-level parallelism in the way sequential programs are actually used.
To convert latent parallelism into performance gains, users may be willing to compromise on the quality of a program's results.
We have developed a parallelizing compiler and runtime that substantially improve scalability by allowing parallelized code to briefly sidestep strict adherence to language semantics at run time.
In addition to boosting performance, our approach limits the sensitivity of parallelized code to the parameters of target CPUs (such as core-to-core communication latency) and the accuracy of data dependence analysis.
[ [[media:CGO2015_Paper.pdf|Paper]] ] [ [[media:CGO2015_Slides.pptx|Slides]] ]