Old 09-19-02, 01:14 AM   #5
Chalnoth

Personally, I think it matters a huge amount which direction is taken today. After all, the direction taken today will affect events years down the line.

As a quick case in point, think of the x86 architecture. This instruction set has, quite simply, held the PC industry back enormously.

For example, if it had been possible, back when the x86 architecture was introduced, to run all software in a multiplatform emulation layer (akin to Java), there would have been performance problems at first, but in the long run things would be running far faster than they are today. Of course, back then there just wasn't enough performance to spare, so it wasn't an option.

Btw, if you doubt this, just look at the APIs used in 3D rendering. In particular, consider a proprietary API: Glide. That API was much faster than Direct3D or OpenGL in the majority of games, but today's hardware has more than picked up that slack. While OpenGL and Direct3D may never be quite as fast as a lower-level, vendor-specific API, they end up being faster in the long run because of the ease of development, and because hardware developers are free to change much more of the hardware than a lower-level API allows (one of the reasons I believe the Voodoo4/5 was so far behind technologically...clinging to Glide made it much harder to implement new features...).

Today, it's a similar scenario. That is, a good game designer might be able to write a shader that is more efficient on today's hardware than anything a compiler could produce.

But that same assembly-language program might end up running much slower, on the next generation of hardware, than one compiled from a higher-level language.

The other thing to keep an eye on is program length. The simple fact is that it quickly becomes impractical to write nothing but assembly-language shaders as programs approach hundreds of lines of assembly, let alone optimal ones. In this situation, a compiler can do a much better job of both keeping the code readable (minimizing programming errors) and optimizing it.
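To make the readability point concrete, here's a rough sketch of my own (not from any particular game; the sampler names are made up, and the assembly is the DX8-style form from memory) of the same two-texture lightmap blend written both ways:

```hlsl
// HLSL: the intent is obvious at a glance
// (assumes baseMap and lightMap samplers are declared elsewhere)
float4 main(float2 uv : TEXCOORD0) : COLOR
{
    return tex2D(baseMap, uv) * tex2D(lightMap, uv) * 2;
}

// Roughly equivalent ps.1.1 assembly: even this trivial case is all
// register bookkeeping, and it only gets worse as shaders grow
ps.1.1
tex t0              ; sample base texture into t0
tex t1              ; sample lightmap into t1
mul_x2 r0, t0, t1   ; modulate, doubled, into the output register
```

Three instructions are still manageable by hand; a few hundred, hand-scheduled for one particular chip's register limits, are not.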

By contrast, it looks right now like the direction DX9 is taking is an attempt to standardize the assembly, leaving HLSL as sort of an "extra." This means, to me, that HLSL is positioned not as a primary development tool, but more as a prototyping tool (i.e. runtime compiling will be uncommon).