nV News Forums


nV News Forums (http://www.nvnews.net/vbulletin/index.php)
-   NVIDIA GeForce 200 Series (http://www.nvnews.net/vbulletin/forumdisplay.php?f=60)
-   -   Would you like to ask Nvidia a question? (http://www.nvnews.net/vbulletin/showthread.php?t=140053)

ChrisRay 10-15-09 01:55 AM

Would you like to ask Nvidia a question?

As a member of the Nvidia user group, I have been responsible, along with the other user group members, for watching trends and delivering feedback to Nvidia. We have been trying to get Nvidia to interact more freely with the community.

In an attempt to do this, we are now fielding a number of questions each week that Nvidia will attempt to respond to. At Nzone, Amorphous and I will be going over the questions and trying to get the most prevalent and relevant ones answered.

Nvidia will be supplying a spot on Nzone ((page is being built)) for the answered questions.


Update: Please post at the Nzone thread. This is important for this to work, because I cannot coordinate the questions and feedback alone.


Greetings everyone. I would like to take this chance to invite Nvidia customers and enthusiasts to ask Nvidia a question.

1) Is the latest trend or development on your mind?

2) Have a question about Nvidia hardware?

3) Have a question about The Way It's Meant to be Played?

Amorphous and I will be selecting the questions that Nvidia will answer. Please keep in mind that not "all" questions are going to be answered. We will try to pick and choose the best questions and the most supported subjects. There are, of course, some limitations. We will not be able to field questions regarding unreleased products or products Nvidia does not support ((for example, Radeon questions)). Also, be aware that if we feel a question has already been answered, we will link you back to that answer.

This is an exciting chance for us to help you communicate your questions, concerns, and feedback to Nvidia, and it will also help Nvidia interact better with the community. Nvidia will be providing a spot to "answer" these questions in the near future; we will update/amend this post once it's available. We will try to submit 3 to 5 questions a week, and if this is successful, Nvidia is committed to continuing it. The number of answers received will also depend on the number of questions asked.

Remember: Amorphous and I will be closely monitoring this thread. Do not troll in this thread. It will not be allowed.

Final Note: This is not a "debate" thread. If you wish to debate the answers that are received, please do so in another thread. This post is specifically for asking questions and receiving answers, not arguing or debating them. You are free to discuss them however you like elsewhere in this community or another.

ChrisRay 10-16-09 11:39 PM

Re: Would you like to ask Nvidia a question?
Just an update. We will be submitting questions to Nvidia every week, and we should have answers by the week's end. Question submissions go in Monday.

ChrisRay 10-23-09 02:57 PM

Re: Would you like to ask Nvidia a question?
The first round of questions, for October 23rd, has been answered. We will continue fielding questions every week, with the hope of answering 3 or more per week.

1. Is NVIDIA moving away from gaming and focusing more on GPGPU? We have heard a lot about Fermi's compute capability, but nothing of how good it is for gamers.

Jason Paul, GeForce Product Manager: Absolutely not. We are all gamers here! But, like G80 and G200 before, Fermi has two personalities: graphics and compute. We chose to introduce Fermi’s compute capability at our GTC conference, which was very compute-focused and attended by developers, researchers, and companies using our GPUs and CUDA for compute-intensive applications. Such attendees require fairly long lead times for evaluating new technologies, so we felt it was the right time to unveil Fermi’s compute architecture. Fermi has a very innovative graphics architecture that we have yet to unveil.

Also, it’s important to note that our reason for focusing on compute isn’t all about HPC. We believe next generation games will exploit compute as heavily as graphics. For example:

· Physical simulation – whether using PhysX, Bullet or Direct Compute, GPU computing can add incredible dynamic realism to games through physical simulation of the environment.

· Advanced graphical effects – compute shaders can be used to speed up advanced post-processing effects such as blurs, soft shadows, and depth of field, helping games look more realistic

· Artificial intelligence – compute shaders can be used for artificial intelligence algorithms in games

· Ray Tracing – this is a little more forward looking, but we believe ray tracing will eventually be used in games for incredibly photo-realistic graphics. NVIDIA’s ray tracing engine uses CUDA.

Compute is important for all of the above. That’s why Fermi is built the way it is, with a strong emphasis on compute features and performance.
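To make the idea concrete: the post-processing effects listed above are per-pixel, data-parallel workloads. Here is a purely illustrative CPU sketch in Python/NumPy (not NVIDIA code) of a box blur, where every output pixel is computed independently from its neighborhood; this is exactly the shape of work a compute shader dispatches one GPU thread per pixel for.

```python
import numpy as np

def box_blur(image, radius=1):
    """Naive box blur: each output pixel averages its (2r+1)^2 neighborhood.

    Every pixel is computed independently of every other pixel, which is
    the kind of data-parallel work a GPU compute shader runs with one
    thread per output pixel.
    """
    h, w = image.shape
    padded = np.pad(image, radius, mode="edge")  # replicate border pixels
    out = np.zeros_like(image, dtype=float)
    k = 2 * radius + 1
    # Sum the k*k shifted copies of the image, then divide by the count.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

# Sanity check: a constant image is unchanged by averaging.
flat = np.ones((8, 8))
assert np.allclose(box_blur(flat), flat)
```

The same one-thread-per-pixel structure applies to the soft-shadow and depth-of-field passes mentioned above; only the per-pixel arithmetic changes.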

In addition, we wouldn’t be investing so heavily in gaming technologies if we were really moving away from gaming. Here are a few of the substantial investments NVIDIA is currently making in PC gaming:

· PhysX and 3D Vision technologies

· The Way it’s Meant to be Played program, including technical support, game compatibility testing, developer tools, antialiasing profiles, ambient occlusion profiles, etc.

· LAN parties and gaming events (including PAX, PDX LAN, Fragapalooza, Million Man LAN, Blizzcon, and Quakecon to name a few recent ones). Attached are some links to videos from those events.

We put our money where our mouth is here.

Finally, Fermi has plenty of “traditional” graphics goodness that we haven’t talked about yet. Fermi’s graphics architecture is going to blow you guys away! Stay tuned.

2. Why has NVIDIA continued to refresh the G92? Why didn't NVIDIA create an entry-level GT200 piece of hardware? The constant G92 renames and reuse of this aging part have caused a lot of discontent amongst the 3D enthusiast community.

Jason Paul, GeForce Product Manager: We hear you. We realize we are behind with GT200 derivative parts, and we are doing our best to get them out the door as soon as possible. We invested our engineering resources in transitioning our G9x class products from 65nm to 55nm manufacturing technology, as well as adding several new video and display features to GT 220/210, which put these GT200-derivative products later in time than usual. Also, 40nm capacity has been limited, which has made the transition more difficult.

Since its introduction, G92 has remained a strong price/performance product in our line-up. So why did we rebrand it? While hardware enthusiasts often look at GPUs in terms of the silicon core (i.e. G92) and architecture (i.e. GT2xx), many of our less techie customers instead think about GPUs simply in terms of performance, price, and feature set, summarized via the product name. The product name is an easy way to communicate how products with the same base feature set (i.e. DirectX 10 support) compare to each other in terms of price and performance. Let’s take an example – which is the higher performance product, an 8800 GT or a 9600 GT? The average Joe looking at an OEM web configurator or Best Buy retail shelf probably won’t know the answer. But if they saw a 9800 GT and a 9600 GT, they would know that the 9800 GT would provide better performance. By keeping G92 branding current with the rest of our DirectX 10 product line-up, we were able to more effectively communicate to customers where the product fit in terms of price and performance. At the same time, we tried to make it clear to the technical press that these new brands were based on the G92 core, so enthusiasts would know this information up front.

3. Is it true that NVIDIA has offered to open up PhysX to ATi without stipulation so long as ATi offers its own support and codes its own driver, or is ATi correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATi gpus? What is NVIDIA’s official stance in allowing ATi to create a driver at no cost for PhysX to run on their GPUs via OpenCL?

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can’t really give PhysX away for “free” for the same reason why a Havok license or x86 license isn’t free—the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company who approaches us with a serious proposal.

4. Is NVIDIA fully committed to supporting 3D Vision for the foreseeable future with consistent driver updates, or will we see a decrease in support, as many 3D Vision users perceive to be the current trend? For example, a lot of games have major issues with shadows while running 3D Vision. Can profiles fix these issues, or are we going to have to rely on developers to implement 3D Vision-compatible shadows? What role do developers play in having a good 3D Vision experience at launch?

Andrew Fear, 3D Vision Product Manager: NVIDIA is fully committed to 3D Vision. In the past four driver releases, we have added more than 50 game profiles to our driver and we have seeded over 150 3D Vision test setups to developers worldwide. Our devrel team works hard to evangelize the technology to game developers and you will see more developers ensuring their games work great with 3D Vision. Like any new technology, it takes time and not every developer is able to intercept their development/release cycles and make changes for 3D Vision. In the specific example of shadows, sometimes these effects are rendered with techniques that need to be modified to be compatible with stereoscopic 3D, which means we have to recommend users disable them. Some developers are making the necessary updates, and some are waiting to fix it in their next games.

In the past few months we have seen our developer relations team work with developers to make Batman: Arkham Asylum and Resident Evil 5 look incredible in 3D. And we are excited now to see new titles that are coming – such as Borderlands, Bioshock 2, and Avatar – that should all look incredible in 3D.

Game profiles can help configure many games, but game developers spending time to optimize for 3D Vision will make the experience better. To help facilitate that, we have provided new SDKs for our core 3D Vision driver architecture that lets developers have almost complete control over how their game is rendered in 3D. We believe these changes, combined with tremendous interest from developers, will result in a large growth of 3D Vision-Ready titles in the coming months and years.

In addition to making gaming better, we are also working on expanding our ecosystem to support better picture, movie, and Web experiences in 3D. A great example is our support for the Fujifilm FinePix REAL 3D W1 camera. We were the first 3D technology provider to recognize the new 3D picture file format taken by the camera and provide software for our users. In upcoming drivers, you will also see even more enhancements for a 3D Web experience.

5. Could Favre really lead the Vikings to a Superbowl?

Ujesh Desai, Vice President of GeForce GPU Business: We are glad that the community looks to us to tackle the tough questions, so we put our GPU computing horsepower to work on this one! After simulating the entire 2009-2010 NFL football season using a Tesla supercomputing cluster running a CUDA simulation program, we determined there is a 23.468% chance of Favre leading the Vikings to a Superbowl this season.* But Tesla supercomputers aside, anyone with half a brain knows the Eagles are gonna finally win it all this year! :)

*Disclaimer: NVIDIA is not liable for any gambling debts incurred based on this data.
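For the curious, the Monte Carlo approach joked about above can be sketched in a few lines of ordinary Python. Everything here is made up for illustration — the per-game win probability, the 12-win cutoff as a stand-in for "reached the Super Bowl" — a real simulation would model schedules, opponents, and the playoff bracket:

```python
import random

def simulate_season(win_prob, games=16, wins_needed=12, rng=random):
    """One simulated regular season: count wins across independent games."""
    wins = sum(rng.random() < win_prob for _ in range(games))
    return wins >= wins_needed  # crude proxy for a Super Bowl run

def superbowl_odds(win_prob, trials=100_000, seed=42):
    """Estimate the probability by averaging many simulated seasons."""
    rng = random.Random(seed)  # fixed seed keeps the estimate reproducible
    made_it = sum(simulate_season(win_prob, rng=rng) for _ in range(trials))
    return made_it / trials

print(f"{superbowl_odds(0.7):.1%}")
```

The trials are independent, which is why this kind of workload maps so well onto a GPU: each simulated season can run on its own thread.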

General Lee 10-23-09 11:01 PM

Re: Would you like to ask Nvidia a question?
:lol2: @ gambling disclaimer

noko 10-25-09 02:56 AM

Re: Would you like to ask Nvidia a question?
Seems like good answers to me. I actually like how Nvidia is not just pushing GPU computing but succeeding at it. ATI/AMD talk a lot about it, but I don't see much come from it.

As for PhysX, it would probably be a lot cheaper for ATI to adopt and use it than what it has already cost Nvidia to develop, market, and support it. Needless to say, it's a lot easier for developers to have a ready, working standard in use. As I see it, ATI/AMD would get a lot of this for free: easy use of the hard work that went into the titles that already use PhysX. The downside is I get the feeling ATI/AMD just doesn't trust Nvidia in this, which I hope is not the case.

Once again it is Nvidia pushing technology, as with ray tracing in games. It is too early to tell whether they will be successful; I know Intel had this goal, but Intel is falling even further behind the ball than Nvidia.

Now the derivatives of the 200 series not only worry me but may also fall short compared to ATI/AMD offerings. I hope Fermi is really strong and easy to scale down into the lower price categories; otherwise I think Nvidia will have a somewhat rough time with the next generation of gaming cards.

Virtually every time Nvidia goes down a course, they succeed, and what I mean is new technology and its use. The problem with that on the PC side of things is that, like PhysX, it causes problems with the other partners making PCs and software for them. Plug-and-play standards for hardware and software (as much as possible) have made the PC what it is today. I'm not saying unique items never occurred, but for items that everyone needs (a graphics card or GPU), common standards rule.

To keep this short: I wish Nvidia would start a whole new computing platform using their GPUs for computing and graphics in general. Nvidia in that case would never be held back and would only be limited by what they could achieve by themselves. I think they would excel at this in no uncertain terms, and the time is ripe again for something other than a Mac or Windowed PC; Linux has its own issues with open standards. Maybe a new Amiga or Atari ST, but of course much better :D.

ChrisRay 11-02-09 05:59 PM

Re: Would you like to ask Nvidia a question?
The latest questions have been answered. Due to the forum downtime, we won't be submitting any more until next week. But we got a reply from Jen-Hsun.


Q: With AMD's acquisition of ATI and Intel becoming more involved in graphics, what will NVIDIA do to remain competitive in the years to come?

Jen-Hsun Huang, CEO and founder of NVIDIA: The central question is whether computer graphics is maturing or entering a period of rapid innovation. If you believe computer graphics is maturing, then slowing investment and “integration” is the right strategy. But if you believe graphics can still experience revolutionary advancement, then innovation and specialization is the best strategy.

We believe we are in the midst of a giant leap in computer graphics, and that the GPU will revolutionize computing by making parallel computing mainstream. This is the time to innovate, not integrate.

The last discontinuity in our field occurred eight years ago with the introduction of programmable shading and led to the transformation of the GPU from a fixed-pipeline ASIC to a programmable processor. This required GPU design methodology to include the best of general-purpose processors and special-purpose accelerators. Graphics drivers added the complexity of shader compilers for Cg, HLSL, and GLSL shading languages.

We are now in the midst of a major discontinuity that started three years ago with the introduction of CUDA. We call this the era of GPU computing. We will advance graphics beyond “programmable shading” to add even more artistic flexibility and ever more power to simulate photo-realistic worlds. Combining highly specialized graphics pipelines, programmable shading, and GPU computing, “computational graphics” will make possible stunning new looks with ray tracing, global illumination, and other computational techniques that look incredible. “Computational graphics” requires the GPU to have two personalities – one that is highly specialized for graphics, and the other a completely general purpose parallel processor with massive computational power.

While the parallel processing architecture can simulate light rays and photons, it is also great at physics simulation. Our vision is to enable games that can simulate the interaction between game characters and the physical world, and then render the images with film-like realism. This is surely in the future since films like Harry Potter and Transformers already use GPUs to simulate many of the special effects. Games will once again be surprising and magical, in a way that is simply not possible with pre-canned art.

To enable game developers to create the next generation of amazing games, we’ve created compilers for CUDA, OpenCL, and DirectCompute so that developers can choose any GPU computing approach. We’ve created a tool platform called Nexus, which integrates into Visual Studio and is the world’s first unified programming environment for a heterogeneous computing architecture with the CPU and GPU in a “co-processing” configuration. We’ve encapsulated our algorithm expertise into engines, such as the OptiX ray-tracing engine and the PhysX physics engine, so that developers can easily integrate these capabilities into their applications. And finally, we have a team of 300 world-class graphics and parallel computing experts in our Content Technology group whose passion is to inspire and collaborate with developers to make their games and applications better.

Some have argued that diversifying from visual computing is a growth strategy. I happen to believe that focusing on the right thing is the best growth strategy.

NVIDIA’s growth strategy is simple and singular: be the absolute best in the world in visual computing – to expand the reach of GPUs to transform our computing experience. We believe that the GPU will be incorporated into all kinds of computing platforms beyond PCs. By focusing our significant R&D budget on advancing visual computing, we are creating breakthrough solutions to address some of the most important challenges in computing today. We build Geforce for gamers and enthusiasts; Quadro for digital designers and artists; Tesla for researchers and engineers needing supercomputing performance; and Tegra for mobile users who want a great computing experience anywhere. A simple view of our business is that we build Geforce for PCs, Quadro for workstations, Tesla for servers and cloud computing, and Tegra for mobile devices. Each of these targets different users, and thus each requires a very different solution, but all are visual computing focused.

For all of the gamers, there should be no doubt: You can count on the thousands of visual computing engineers at NVIDIA to create the absolute best graphics technology for you. Because of their passion, focus, and craftsmanship, the NVIDIA GPU will be state-of-the-art and exquisitely engineered. And you should be delighted to know that the GPU, a technology that was created for you, is also able to help discover new sources of clean energy and help detect cancer early, or just to make your computer interaction lively. It surely gives me great joy to know that what started out as “the essential gear of gamers for universal domination” is now off to really save the world.

Keep in touch.


Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world? Will PhysX become open-source?

Tom Petersen, Director of Technical Marketing: NVIDIA supports and encourages any technology that enables our customers to more fully experience the benefits of our GPUs. This applies to things like CUDA, DirectCompute and OpenCL—APIs where NVIDIA has been an early proponent of the technology and contributed to the specification development. If someday a GPU physics infrastructure evolves that takes advantage of those or even a newer API, we will support it.

For now, the only working solution for GPU accelerated physics is PhysX. NVIDIA works hard to make sure this technology delivers compelling benefits to our users. Our investments right now are focused on making those effects more compelling and easier to use in games. But the APIs that we do that on are not the most important part of the story to developers, who are mostly concerned with features, cost, cross-platform capabilities, toolsets, debuggers, and generally anything that helps complete their development cycles.

Q: How is NVIDIA approaching the tessellation requirements for DX11 as none of the previous and current generation cards have any hardware specific to this technology?

Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :-P). We’ll share more details when we introduce Fermi’s graphics architecture shortly!

ChrisRay 12-03-09 05:40 AM

Re: Would you like to ask Nvidia a question?
We should have an update for this pretty soon. The holidays caused a significant delay.

ChrisRay 12-03-09 08:52 PM

Re: Would you like to ask Nvidia a question?
Here are the latest questions. Some felt the original PhysX answer wasn't satisfactory, so we asked them to answer it again.


#1 - How do you expect PhysX to compete in a DirectX 11/OpenCL world?

By Tom Petersen, Director of Technical Marketing: PhysX does not compete with OpenCL or DX11’s DirectCompute.

PhysX is an API and runtime that allows games and game engines to model the physics in a game. Think of PhysX as a layer above OpenCL or DirectCompute, which in contrast are very generic and low level interfaces that enable GPU-accelerated computation. Game developers don’t create content in OpenCL or DirectCompute. Instead they author in toolsets (some of which are provided by NVIDIA) that allow them to be creative quickly. Once they have good content, they “compile” it for a specific platform (PC, Wii, Xbox, PS3, etc.) using another tool flow.
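The layering described here can be sketched abstractly in a few lines of Python. The class and method names below are illustrative only, not real PhysX or CUDA/OpenCL APIs: a high-level physics layer exposes game concepts and delegates the bulk data-parallel math to whatever low-level compute backend sits underneath, which is why the backend API is swappable without the game noticing.

```python
class Backend:
    """Low-level layer (stand-in for CUDA/OpenCL/DirectCompute): it only
    knows how to run a data-parallel map, nothing about physics."""
    def parallel_map(self, fn, items):
        return [fn(x) for x in items]  # a GPU would run these in parallel

class PhysicsEngine:
    """High-level layer (stand-in for PhysX): exposes physics concepts to
    the game and hands the per-body math to the backend."""
    def __init__(self, backend):
        self.backend = backend

    def step(self, positions, velocities, dt):
        # Integrate each body independently -- ideal data-parallel work.
        return self.backend.parallel_map(
            lambda pv: pv[0] + pv[1] * dt,
            list(zip(positions, velocities)),
        )

engine = PhysicsEngine(Backend())
print(engine.step([0.0, 1.0], [2.0, -1.0], dt=0.5))  # prints [1.0, 0.5]
```

Swapping `Backend` for a different implementation changes how the work runs, not what the game calls, which mirrors the point that the implementation API underneath PhysX matters less to developers than the high-level toolset.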

During this process game studios have three basic concerns:

1. Does PhysX make it easier to develop games for all platforms – including consoles?

2. Does PhysX make it easier to have kick ass effects in my game?

3. Will NVIDIA support my efforts to integrate this technology?

And the answer to the three questions above is: yes, yes, and yes. We are spending our time and money pursuing those goals to support developers, and right now the developer community is not telling us that OpenCL or DirectCompute support are required.

In the future this may or may not change, and the dynamics of this situation are hard to predict. We can say this though:

1. AMD and Intel are not investing today at the same pace as NVIDIA in GPU accelerated physics.

2. AMD and Intel will need to do the bulk of the work required to support GPU accelerated PhysX on their products. NVIDIA is not going to do QA or design for AMD or Intel.

At the end of the day, the success of PhysX as a technology will depend on how easy it is for game designers to use and how incredible the game effects are that they create. Batman: Arkham Asylum is a good example of the type of effects we can achieve with PhysX running on NVIDIA GPUs, and we are working to make the next round of games even more compelling. At this time, NVIDIA has no plan to move from CUDA to either OpenCL or DirectCompute as the implementation engine for GPU acceleration. Instead we are working to support developers and implement killer effects.

So does NVIDIA profit from all this? We sure hope so. If we make our GPUs more desirable because they do incredible things with PhysX, then we have done a great job for our customers and made PC gaming more compelling.

#2 - Will PhysX become open-source?

Tom Petersen: NVIDIA is investing a lot of time and effort in PhysX and we do not plan to make it open source today. Of course the binaries for the SDK are distributed for free, and source code is available for licensing if game designers need it.


Powered by vBulletin® Version 3.7.1
Copyright ©2000 - 2015, Jelsoft Enterprises Ltd.
Copyright ©1998 - 2014, nV News.