OpenMP in Small Bites/Tasking
| Tutorial | |
|---|---|
| Title: | OpenMP in Small Bites |
| Provider: | HPC.NRW |
| Contact: | tutorials@hpc.nrw |
| Type: | Multi-part video |
| Topic Area: | Programming Paradigms |
| License: | CC-BY-SA |
| Syllabus | |
| 1. Overview | |
| 2. Worksharing | |
| 3. Data Scoping | |
| 4. False Sharing | |
| 5. Tasking | |
| 6. Tasking and Data Scoping | |
| 7. Tasking and Synchronization | |
| 8. Loops and Tasks | |
| 9. Tasking Example: Sudoku Solver | |
| 10. Task Scheduling | |
| 11. Non-Uniform Memory Access | |
This video introduces another way to express parallelism in OpenMP: tasking. This concept enables the programmer to parallelize code regions with non-canonical loop forms or regions which do not use loops at all (including recursive algorithms). The video explains how to use OpenMP tasking, how to synchronize tasks, how to apply cut-off strategies and how an OpenMP runtime environment manages tasks in queues. More examples and details about Tasking and Data Scoping as well as Tasking and Synchronization are given in further parts of this tutorial.
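As a rough illustration of these ideas (not an example from the video itself), the following sketch parallelizes a recursive Fibonacci computation with OpenMP tasks, synchronizes the child tasks with taskwait, and uses a simple cut-off below which the recursion continues serially; the CUTOFF value and the fib function are illustrative choices.

```c
#include <stdio.h>
#include <omp.h>

/* Cut-off threshold (illustrative): below this, no new tasks are created,
   avoiding the overhead of many very small tasks. */
#define CUTOFF 20

long fib(int n) {
    long x, y;
    if (n < 2) return n;

    if (n < CUTOFF) {
        /* Cut-off: solve small subproblems serially. */
        return fib(n - 1) + fib(n - 2);
    }

    #pragma omp task shared(x)
    x = fib(n - 1);

    #pragma omp task shared(y)
    y = fib(n - 2);

    /* Wait for the two child tasks created above before combining results. */
    #pragma omp taskwait
    return x + y;
}

int main(void) {
    long result;
    #pragma omp parallel
    {
        /* One thread creates the initial call; the generated tasks are
           executed by the threads of the team. */
        #pragma omp single
        result = fib(30);
    }
    printf("fib(30) = %ld\n", result);
    return 0;
}
```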
Video
Quiz
1. What is the default data scoping of a variable in a task scope? Hint: Assume that the variable is declared before the task region, but in the same function.
2. Which tasks are synchronized by a taskwait construct?
3. Why can it be beneficial for performance to use cut-off strategies in task-based OpenMP programs?