Day two of the GPU Technology Conference featured a host of new CUDA-centric sessions covering a range of topics. And while yesterday's theme seemed to be medical, today libraries reign supreme.
Parallel computing through libraries is a great way to leverage GPU computing performance. Not only can domain-specific libraries provide APIs tailored to users familiar with a particular field, but they can also exploit particular algorithmic properties to optimize parallel performance. In a similar way, domain-specific languages can play the same tricks at the language level. Here are several of the sessions that provided great examples of libraries and domain-specific languages tuned for GPU computing:
- Designing a Geoscience Accelerator Library Accessible from High Level Languages: M.I.T.'s Chris Hill and Alan Richardson discussed applications that span atmosphere, ocean, geomorphology, and porous media flows. They reviewed the scope of the library, its meta-programming approaches, and its key design attributes.
- CUDA Libraries Open House: Lead NVIDIA developers covered NVIDIA's CUDA libraries' capabilities, performance, and future directions.
- Rapid Prototyping Using Thrust: Saving Lives with High Performance Dosimetry: Reps from the Atomic and Alternative Energies Commission described how they have used the Thrust high-level library for CUDA C/C++ to quickly prototype innovative algorithms.
- Domain-Specific Languages: Luminary Hanspeter Pfister and Milos Hasan of Harvard University explained how attendees can develop their own DSLs using source-to-source translation and a suitable backend library.