18-03-2017, 04:51 AM
Professor Ahmad Afsahi
Parallel Processing Research Laboratory
Department of Electrical and Computer Engineering, Queen’s University, Kingston, ON
PhD Graduate Student Positions:
The Parallel Processing Research Laboratory (PPRL) within the Department of Electrical and Computer Engineering at Queen’s University currently has openings for outstanding PhD graduate students. The laboratory is active in research on high-performance communication runtimes and system software for homogeneous and heterogeneous computing systems and their high-speed interconnects. Candidates should have a strong background in system software (operating systems, algorithms, data structures, networking, computer architecture, compilers/middleware) and strong programming skills. Experience with parallel programming paradigms at the implementation or application level is an asset. Please send your complete application, including a cover letter, CV, statement of research, and sample publications (if any), in a single PDF file to Prof. Afsahi at ahmad.afsahi@queensu.ca. Additional information about PPRL can be found below.
Research Keywords:
Parallel and distributed computing, network-based high-performance computing, cluster computing, high-performance interconnects and communication subsystems, communication runtime, system software, hybrid systems with accelerators (GPUs, Intel Xeon Phi), parallel programming models (MPI, MPI+X, PGAS, UPC, OpenSHMEM, OpenMP, CUDA, OpenCL, OpenACC, etc.), power-aware high-performance computing, workload characterization, benchmarking and performance evaluation
Research Summary:
Research in the area of high-performance computing (HPC) systems has primarily focused on improving the performance of computers in order to solve computationally intensive problems and to support emerging networking and commercial applications. Parallel processing is at the heart of such powerful computers. The Parallel Processing Research Laboratory at Queen’s carries out research in the main areas of parallel and distributed processing, network-based high-performance computing, and power-aware high-performance computing. We are interested in innovative techniques that can be applied effectively at different layers to enhance performance and/or to minimize power/energy consumption in high-end parallel computing systems and their high-performance interconnects.

Our research, in part, seeks to propose, design, and evaluate novel techniques for high-performance communication, communication runtimes, and system software for high-performance clusters and data centers. We pursue several research directions to support the efficient and scalable execution of parallel programming models such as MPI, MPI+X, PGAS (UPC, OpenSHMEM), CUDA, OpenCL, and OpenACC, and of the applications that use them, on traditional and hybrid (GPU, Xeon Phi) HPC systems with high-performance interconnects such as InfiniBand, Omni-Path, iWARP Ethernet, RoCE, and proprietary interconnects.

In addition, given the increasing power/energy consumption of high-performance computing systems and data centers, our research is, in part, concerned with proposing novel ideas to reduce power consumption and improve energy efficiency with little or no impact on performance. Workload characterization of scientific, engineering, and commercial applications, as well as benchmarking and performance evaluation of parallel programming paradigms and high-performance computing systems, is an integral part of our research.