NVIDIA has published the agenda for the upcoming GPU Technology Conference, and it looks like an amazing collection of talks. Many people today claim GPUs are unnecessary for compute because the CPU can match them in speed, but they leave out the new and future algorithms that could easily put the power back in the GPU's court. It looks like NVIDIA has gone out of its way to collect speakers who drive this point home. Take a look at some of the briefs.
CFD Simulations:
Shockingly Fast and Accurate CFD Simulations (#2078) - Timothy Warburton, Rice University
In the last three years we have demonstrated how GPU accelerated discontinuous Galerkin methods have enabled simulation of time-dependent, electromagnetic scattering from airplanes and helicopters. In this talk we will discuss how we have extended these techniques to enable GPU accelerated simulation of supersonic airflow as well.
Nearly Instantaneous Reconstruction for MRIs (#2094) - General Electric
GE's Autocalibrating Reconstruction for Cartesian Imaging (ARC) is a computationally intensive, widely used algorithm in MRI reconstruction using parallel imaging. We demonstrate that an optimized CUDA implementation of ARC on a GPU can enable nearly instantaneous reconstruction, with speedups of up to 10x over an optimized dual-socket quad-core CPU implementation. We will discuss challenges with both computational intensity and data read/write efficiency. We will also compare the Fermi C2050 with the C1060.
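GE hasn't published the ARC kernels themselves, but the abstract's speedup claim rests on the same data-parallel mapping most reconstruction codes use: one GPU thread per k-space sample. As a minimal sketch (not GE's algorithm; the kernel, names, and the per-sample complex-weight operation are illustrative assumptions), a CUDA kernel of that shape looks like this:

```cuda
// Illustrative sketch only -- NOT GE's ARC algorithm. A generic
// elementwise complex multiply over k-space samples, one thread per
// sample, showing the data-parallel mapping such workloads rely on.
#include <cuda_runtime.h>
#include <cuComplex.h>

__global__ void applyWeights(const cuFloatComplex *kspace,
                             const cuFloatComplex *weights,
                             cuFloatComplex *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = cuCmulf(kspace[i], weights[i]);  // one sample per thread
}

// Host-side launch: cover all n samples with 256-thread blocks.
void launchApplyWeights(const cuFloatComplex *d_kspace,
                        const cuFloatComplex *d_weights,
                        cuFloatComplex *d_out, int n)
{
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    applyWeights<<<blocks, threads>>>(d_kspace, d_weights, d_out, n);
}
```

With millions of samples processed concurrently, the arithmetic is nearly free and the "data read/write efficiency" the speakers mention (coalesced memory access) becomes the real bottleneck, which is why the memory-bandwidth differences between the C1060 and the Fermi C2050 matter.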
The Large Hadron Collider:
Processing Petabytes per Second at the Large Hadron Collider at CERN (#2135) - Philip Clark, University of Edinburgh; Andy Washbrook, University of Edinburgh
Learn how GPUs could be adopted by the ATLAS detector at the Large Hadron Collider (LHC) at CERN. The detector, located at one of the collision points, must trigger on unprecedented data acquisition rates (PB/s) to decide whether to record each event or lose it forever. We begin by introducing the ATLAS experiment and the computational challenges it faces. The second part will focus on how GPUs can be used for algorithm acceleration, using two critical algorithms as exemplars. Finally, we will outline how GPGPU acceleration could be exploited and incorporated into the future ATLAS computing framework.
Entertainment (You didn't think I'd leave you out, did you?):
Developing GPU Enabled Visual Effects For Film And Video (#2125) - Bruno Nicoletti, The Foundry; Jack Greasley, The Foundry
The arrival of fully programmable GPUs is now changing the visual effects industry, which has traditionally relied on CPU computation to create its spectacular imagery. Implementing the complex image processing algorithms used by VFX is a challenge, but the payoffs in terms of interactivity and throughput can be enormous. Hear how The Foundry's novel image processing architecture simplifies the implementation of GPU-enabled VFX software and eases the transition from a CPU-based infrastructure to a GPU-based one.
Rendering Revolution (#2165) - Ken Pimentel, Autodesk
Learn how GPU technologies are transforming the making of pixels. This talk will cover GPU-centric rendering techniques that leverage both the raw computational capabilities of NVIDIA's GPUs and advanced pixel-shading techniques for interactive visualization and rendering.
Looks like a great event this year. Be sure to check out the entire agenda, or browse it by session. What else do you see that looks interesting? Post in the comments!