View Full Version : gpuprime v0.58b x86 (WinXP)

Hi,

I've just made an initial beta release of gpuprime and decided to open this release to an entirely public beta test. gpuprime is a general-purpose GPU computing project that searches for prime numbers on NVIDIA G80- and G90-based GPUs. Primes are computed in parallel using kernel (fragment) threads.

If you want to try gpuprime then here is access to the beta download:

gpuprime v0.58b x86 (WinXP) (http://www.skenegroup.net/fornvidiots/gc/gpuprime/gpuprime_v0.58b.rar) (69 KB)

Release notes

------------------------

* This is a debug version. The non-debug version of gpuprime is 10 times faster and supports a much larger number of digits!

* General-purpose computing on NVIDIA G80 and G90-based GPUs (GeForce 8 series)

* Parallel and unified computing with kernel (fragment) threads

* --kernelsize (160x160x4) and --primesearches (100000 digits) arguments are locked

* Multi-GPU (SLI) support disabled

* gpuprime is based on GPUSPE (gpu stream processing evaluation) and GC (gpu computing)

* To test GPU stability, increase the number of test iterations (20 or more)

* 160.xx or newer NVIDIA ForceWare (Winxp2k) drivers recommended

Result:

GeForce 8800 Ultra - computation time: 337,1475 ms

(snowcool)

cool, any estimate on how long that task would take on any given CPU?

Just curious about the contrast.

With the same prime algorithm, the QX6850 (4 threads) is at least 50 times slower than the 8800U. This algorithm is not an efficient way to compute primes on a CPU, but on a GPU it scales very well :)

imo, in general the best algorithms for computing primes on a CPU are the sieve of Atkin, the sieve of Eratosthenes and some FFT-based ones. I haven't tested these algorithms on a GPU yet.

pnarciso

12-13-07, 09:27 AM

Post the command line options please.


gpuprime v0.59b x86 win32 (gpgpu)

Usage: gpuprime [-Options..]

Options

-g, --gpuid=STRING [(e)xtensions]

Print out gpuid and driver version

(e)xtensions: returns a list of supported GL extensions

-p, --primesearches=SIZE [1-16000000, default: 100000]

Number of prime searches.

-k, --kernelsize=SIZE [2-256, default: 160]

Size of kernel batch.

-i, --iterations=SIZE [default: 1]

Number of test iterations.

-S, --sqrt=NUMBER [0-1, default: 1]

Use square root (sqrt) in prime search.

-s, --saveresult=STRING [(t)ext/(h)tml/(b)oth, default: text]

Save test result(s) to disk as text or HTML format

(t)ext: text format [named gpuprimes_results.txt]

(h)tml: HTML format [named gpuprimes_results.html]

(b)oth: save result(s) in text and HTML formats

-v, --viewresult

View test result(s).

-h, --help

Print the list of all available command line options.

-d, --disclaimer

Print out license and disclaimer.

Happy New Year, Folks (nana2)

here is some information about the next beta release:

GCP (GPU Computing Primes)

- General-purpose computing (primes) on NVIDIA G90, G80, G70 and NV40-based GPUs (GeForce 8-6 series)

- The GCP is based on GPUSPE (gpu stream processing evaluation) and GC (gpu computing)

- Easy GUI, no more messing with console

- Many new computing arguments, e.g. primes, kernel type and grid size

- ASM (ISA) and GLSL kernel types supported

- Parallel and unified computing with kernel (fragment) threads

- The computing kernel (ASM/ISA) is over 830x faster than the old one and 50% faster than the fastest CPU kernel (the sieve of Atkin / the sieve of Eratosthenes)

- GPU performance (1K, 10K, 32K, 64K, 128K, 256K, 512K, 1M) and stability tests (looping)

- Easy-to-read computing result file with checksum

- Multi-GPU (SLI) and CUDA support will be addressed in later releases

Computing result with GeForce 8800 Ultra:

Computing prime numbers 2>100000...

[Jan 01 21:05] Iteration: 1/1> Iteration time: 0,41 ms - Search: Pass

!edited wrong test result

!2ndedit G70 and NV40 supports added

Crisao23

02-01-08, 09:12 PM

Awesome !

Thanks for this proggie.

EciDemon

02-18-08, 03:53 AM

Just to clarify, is this app used to stresstest or benchmark gpu ?

A good stresstester is what I need to check for artifacts etc while testing various overclocks.

Hi guys, oops... over a year has gone by, sorry about that :eek:

Well I've just released an initial beta of CUDA Primes (dev name).

If you want to try CUDA Primes then here is access to the beta download:

cudaprimes v0.3a x86 (http://www.skenegroup.net/fornvidiots/gc/gpuprime/cudaprimes_v0.3a.zip) (WinXP/Vista) (298 KB)

Release notes

------------------------

* This is a debug version, but still DAMN FAST! :)

* Probably doesn't work with Windows 7.

* General-purpose computing on NVIDIA GT200, G90 and G80-based GPUs.

* Parallel and unified computing with CUDA kernel threads.

* Search primes between 1 and 100000 (locked in debug version).

* Multi-GPU (SLI) support disabled.

* In order to test the GPU stability, run burn.bat (10000 iterations). You may increase the number of test iterations by editing run.bat.

* 180.xx or newer NVIDIA ForceWare (Winxp2k/Vista) drivers with CUDA support recommended.

* UPDATE!! The coming 0.4a release will fix problems with the timer. Estimated performance: about 15x faster than gpuprime (OpenGL-based) and almost 30x faster than the fastest CPU kernel (the sieve of Atkin / the sieve of Eratosthenes) on an Intel Core i7 920 (2.66@3 GHz) processor.

EDIT! meh, result removed because of buggy timer function... it was just too fast !DEBUG! :)

Have fun!

vBulletin® v3.7.1, Copyright ©2000-2015, Jelsoft Enterprises Ltd.