[ppl-accel] [TMS] New log in Task Accel Minutes


  • From: Michael Robson <mprobson AT illinois.edu>
  • To: gplkrsh2 AT illinois.edu, mille121 AT illinois.edu, sbak5 AT illinois.edu, ppl-accel AT cs.illinois.edu
  • Subject: [ppl-accel] [TMS] New log in Task Accel Minutes
  • Date: Mon, 02 May 2016 14:54:01 -0500


A new log has been added to Task: Accel Minutes by Michael Robson
The text of the log is:
<h3>Node+Accel Meeting</h3>
<h5>2 - 2:50 in SC 4102</h5>
<h5>In Attendance: Seon, Michael, Harshitha</h5>
<h4>Agenda:</h4>
<ul>
<li>
<p>Discuss the next steps in the node level programming model</p>
</li>
<li>
<p>Discuss the short-term and future plans.</p>
</li>
<li>
<p>More integrations?</p>
<ul>
<li>How what we have so far will work with Xeon Phi</li>
<li>Accelerators</li>
<li>etc.</li>
</ul>
</li>
<li>
<p>Node level programming model</p>
<ul>
<li>Seamlessly work with accel and Xeon Phi</li>
<li>Task stealing: one “PE thing” per accel device</li>
<li>Xeon Phi: PE or drone model
<ul>
<li>PE per Xeon Phi core</li>
<li>Use special mapping to assign chares</li>
<li>The rest steal work?</li>
</ul>
</li>
<li>Feasibility:
<ul>
<li>Steal tasks in clumps (like the accel framework)?</li>
<li>Lightweight tasks for different architectures would be helpful
<ul>
<li>NVLink</li>
<li>Xeons with integrated GPUs</li>
</ul>
</li>
</ul>
</li>
<li>Use case: slack
<ul>
<li>Unsure where a tiny task will end up being executed</li>
<li>Stealing balances out the imbalance</li>
<li>Think about the reverse: queue everything on the GPU
<ul>
<li>Have the CPU steal tasks if it is waiting on the GPU</li>
<li>Stealing could depend on a table of suitability (see the sketch after these minutes)
<ul>
<li>i.e., if a task is only slightly faster on the GPU, still steal it</li>
<li>Also, could steal based on data location</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li>Xeon Phi usage
<ul>
<li>Unsure about the strength of the new cores</li>
<li>Maps, node queue, etc.?
<ul>
<li>Map PEs to all the logical threads</li>
<li>But only map chares to a subset of those</li>
</ul>
</li>
<li>Need to determine the right ratio of slack/idle to master PEs</li>
<li>Can we use current OpenMP semantics on Xeon Phi?
<ul>
<li>Needs a PE everywhere to work</li>
</ul>
</li>
</ul>
</li>
<li>Task PEs
<ul>
<li>Only expect tasks from task queue (skip looking at node queue)</li>
<li>Easier to implement</li>
<li>Existing cuts</li>
<li>OpenMP infrastructure</li>
<li>Map of PEs and Chares</li>
<li>“task PEs”</li>
<li>What else?
<ul>
<li>Drones?</li>
</ul>
</li>
</ul>
</li>
<li>Taskq marker
<ul>
<li>Automatically put in the GPU queue</li>
<li>Coarse-grained work goes to the GPU</li>
<li>The CPU can steal finer-grained portions</li>
</ul>
</li>
</ul>
</li>
<li>
<p>Is there an existing work-stealing runtime to balance between the devices?</p>
<ul>
<li>Vivek Sarkar’s group has a paper</li>
</ul>
</li>
<li>
<p>Use good scheduling to determine the split, then work stealing to load
balance</p>
</li>
<li>
<p>Future work</p>
<ul>
<li>Xeon Phi
<ul>
<li>Test current impl.</li>
<li>and possible easy extensions</li>
</ul>
</li>
<li>Accel
<ul>
<li>Steal entire task</li>
<li>Then, steal partial task</li>
</ul>
</li>
<li>OpenMP
<ul>
<li>OmpSs integration</li>
<li>Switch to LLVM OpenMP (away from GOMP)</li>
<li>Work on OpenMP target/accel directives</li>
</ul>
</li>
</ul>
</li>
</ul>
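To make the "table of suitability" bullet above concrete, here is a minimal standalone C++ sketch. It is not Charm++, accel-framework, or TMS code; the Task and SuitabilityTable types, the 1.5x threshold, and the sample speedups are all hypothetical, chosen only to illustrate the rule discussed in the meeting: queue work on the GPU first and let an idle CPU steal back tasks whose GPU advantage is small or whose data is already CPU-resident.

// Minimal, self-contained sketch of suitability-guided stealing. NOT actual
// Charm++ or TMS code: Task, SuitabilityTable, and all values are hypothetical.
#include <deque>
#include <iostream>
#include <string>
#include <unordered_map>

enum class Device { CPU, GPU };

struct Task {
    std::string kind;   // e.g. "small_gemm", "reduction"
    Device data_home;   // where the task's input data currently resides
};

// Suitability table: estimated GPU-over-CPU speedup per task kind.
struct SuitabilityTable {
    std::unordered_map<std::string, double> gpu_speedup;

    // Worth stealing onto the CPU if the GPU is only slightly faster,
    // or if the data is already CPU-resident.
    bool worth_stealing(const Task& t, double threshold = 1.5) const {
        auto it = gpu_speedup.find(t.kind);
        const double speedup = (it != gpu_speedup.end()) ? it->second : 1.0;
        return speedup < threshold || t.data_home == Device::CPU;
    }
};

int main() {
    // "The reverse" idea from the minutes: queue everything on the GPU first.
    std::deque<Task> gpu_queue = {
        {"large_gemm", Device::GPU},
        {"small_gemm", Device::CPU},
        {"reduction",  Device::GPU},
    };
    SuitabilityTable table{{
        {"large_gemm", 8.0}, {"small_gemm", 1.2}, {"reduction", 1.4},
    }};

    // An idle CPU worker walks the GPU queue and peels off anything the table
    // says is cheap to run locally; the rest stays queued for the device.
    std::deque<Task> cpu_stolen, stays_on_gpu;
    for (const Task& t : gpu_queue) {
        (table.worth_stealing(t) ? cpu_stolen : stays_on_gpu).push_back(t);
    }
    gpu_queue.swap(stays_on_gpu);

    std::cout << "CPU stole " << cpu_stolen.size() << " task(s); "
              << gpu_queue.size() << " remain for the GPU\n";
    return 0;
}

A real runtime would presumably steal from the tail of the GPU queue while the device drains the head, and, per the taskq-marker discussion, could send coarse-grained chunks to the GPU while the CPU peels off finer-grained portions.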

To view this item, click on or cut and paste:
https://charm.cs.illinois.edu/private/tms/listlog.php?param=1490#17634

--
Message generated by TMS


