nV News Forums > Linux Support Forums > NVIDIA Linux
Old 02-23-12, 12:45 PM   #1
glUser
Registered User
 
Join Date: Jan 2012
Posts: 3
GLSL loss of precision while converting int to double

Using: OpenGL 4.2, GLSL #version 420, GeForce GT 430, driver 295.20, Linux 64 bit

Precision loss occurs when converting a double-precision float to an integer, or an integer to a double-precision float:
only the 24 most significant bits are kept.

This gives precision loss:
(unsigned integer to double)
Code:
    uint uvalue;
    double dvalue;
    dvalue = double(uvalue);
or
(double to unsigned integer)
Code:
    uint uvalue;
    double dvalue;
    uvalue = uint(dvalue);
This is my workaround:
(unsigned integer to double)
Code:
    uint uvalue;
    double dvalue;
    dvalue = double(uvalue&0xFFFF0000U) + double(uvalue&0x0000FFFFU);
or
(double to unsigned integer)
Code:
    uint uvalue;
    double dvalue;
    dvalue = floor(dvalue);
    uvalue = (uint(floor(dvalue / 65536.0lf))<<16U) | uint(mod(dvalue, 65536.0lf));
(Signed integer conversions also lose precision.)

Is this a driver bug or expected behaviour?
(It behaves like an int → float → double, or double → float → int, conversion.)

thanks
Old 02-23-12, 04:01 PM   #2
Plagman
NVIDIA Corporation
 
Plagman's Avatar
 
Join Date: Sep 2007
Posts: 254
Re: GLSL loss of precision while converting int to double

Hi glUser,

Thanks for the bug report; can you please attach a minimal testcase that demonstrates the issue so that I can take a closer look?

Thanks,
- Pierre-Loup