HEY, nVidia just released a new science software language for distributed GPUs!

Message boards : Number crunching : HEY, nVidia just released a new science software language for distributed GPUs!



Faye Kane
Joined: 13 Oct 06 · Posts: 9 · Credit: 61,614 · RAC: 0
Message 35364 - Posted: 23 Jan 2007, 5:55:12 UTC

Whoever does the programming should know that nVidia just released a new programming language called CUDA, specifically for doing distributed science programming on its graphics cards:

"GPU computing with CUDA is a new approach to computing where hundreds of on-chip processor cores simultaneously communicate and cooperate to solve complex computing problems up to 100 times faster than traditional approaches.

"This complete development environment gives developers the tools they need to solve new problems in computation-intensive applications.

"The CUDA Software Developers Kit (SDK) is currently available to developers and researchers through the NVIDIA registered developer program."

...and it's free.

This should make it MUCH easier than the kludgy way it is done on ATI cards. For more info, see:

http://www.nvidia.com/object/IO_37226.html
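To give a feel for what CUDA's data-parallel model means, here is a rough sketch in plain Python of the idea: the same small "kernel" runs once per thread, organized into blocks, and each thread uses its indices to pick the one element it works on. The names `vector_add_kernel` and `launch` are invented for illustration; this is only an analogy, not the real C-based CUDA API.

```python
# Python analogy of CUDA's data-parallel model (not real CUDA code).
# On the GPU, all of these "threads" would run simultaneously;
# here we just loop over them.

def vector_add_kernel(a, b, out, block_idx, block_dim, thread_idx):
    i = block_idx * block_dim + thread_idx  # global element index
    if i < len(out):                        # guard against running off the end
        out[i] = a[i] + b[i]

def launch(kernel, grid_dim, block_dim, *args):
    # One kernel invocation per (block, thread) pair, like a CUDA launch.
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(*args, block_idx, block_dim, thread_idx)

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [10.0, 20.0, 30.0, 40.0, 50.0]
out = [0.0] * 5
launch(vector_add_kernel, 2, 3, a, b, out)  # 2 blocks of 3 threads covers 5 elements
print(out)  # [11.0, 22.0, 33.0, 44.0, 55.0]
```

The point of the index arithmetic is that no thread needs to know about any other; that independence is what lets hundreds of shader cores run the kernel at once.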

-- faye
Tom Philippart
Joined: 29 May 06 · Posts: 183 · Credit: 834,667 · RAC: 0
Message 35381 - Posted: 23 Jan 2007, 12:42:00 UTC

There first needs to be a BOINC version for GPUs before any project app can be done.
FluffyChicken
Joined: 1 Nov 05 · Posts: 1260 · Credit: 369,635 · RAC: 0
Message 35409 - Posted: 23 Jan 2007, 18:41:18 UTC - in response to Message 35381.  

> There first needs to be a BOINC version for GPUs before any project app can be done.


BOINC would run on the CPU as it does now; the science app is the only thing that needs altering.

Also, BOINC already identifies the 'accelerator' the computer has, so that is not a problem.
Team mauisun.org
Who?
Joined: 2 Apr 06 · Posts: 213 · Credit: 1,366,981 · RAC: 0
Message 35412 - Posted: 23 Jan 2007, 19:49:05 UTC - in response to Message 35409.  

> There first needs to be a BOINC version for GPUs before any project app can be done.

> BOINC would run on the CPU as it does now; the science app is the only thing that needs altering. Also, BOINC already identifies the 'accelerator' the computer has, so that is not a problem.


GPUs are not designed or QAed for full-time workloads ... on top of this, the G80 fan is pretty loud ...

I expect to hear horror stories about GPU smoke ... let's see :)

This is my personal opinion; it does not involve my employer.

who?
FluffyChicken
Joined: 1 Nov 05 · Posts: 1260 · Credit: 369,635 · RAC: 0
Message 35413 - Posted: 23 Jan 2007, 20:16:21 UTC - in response to Message 35412.  

> GPUs are not designed or QAed for full-time workloads ... on top of this, the G80 fan is pretty loud ... I expect to hear horror stories about GPU smoke ... let's see :)

Hey, it's nVidia giving the tools for its own GPUs, so it's their money ;-)
Team mauisun.org
Tom Philippart
Joined: 29 May 06 · Posts: 183 · Credit: 834,667 · RAC: 0
Message 35415 - Posted: 23 Jan 2007, 20:34:52 UTC

I have an 8800GTX and a 6800GT here, and by comparison the 8800GTX is much quieter; you can live with it.

Let's keep this on topic, though.
Seventh Serenity
Joined: 30 Nov 05 · Posts: 18 · Credit: 87,811 · RAC: 0
Message 35424 - Posted: 23 Jan 2007, 22:22:13 UTC

I read about this before. It's a far better idea than what ATI are doing (ATI have stuck to the X1600/X1800/X19## series).
"In the beginning the universe was created. This made a lot of people very angry and is widely considered as a bad move." - The Hitchhiker's Guide to the Galaxy
The_Bad_Penguin
Joined: 5 Jun 06 · Posts: 2751 · Credit: 4,271,025 · RAC: 52
Message 35425 - Posted: 23 Jan 2007, 22:42:27 UTC - in response to Message 35424.  

> Read about this before. Far better idea than what ATI are doing (ATI have stuck to the X1600/X1800/X19## series).

I thought ATI was going to do well with the upcoming R600/R700 for these types of science apps. No?
Faye Kane
Joined: 13 Oct 06 · Posts: 9 · Credit: 61,614 · RAC: 0
Message 35428 - Posted: 23 Jan 2007, 23:13:03 UTC - in response to Message 35415.  
Last modified: 24 Jan 2007, 0:10:25 UTC

> I have an 8800GTX and a 6800GT here, and by comparison the 8800GTX is much quieter; you can live with it.

You have an 8800GTX? WOOH! I just got a 7800GS, and only because I have AGP. It's the fastest AGP card made, but it pales in comparison to the 8800. I've never played a computer game; I just like screwing around with 3D graphics (and technology in general).

I say that in 5 years, an 8800-class video chip (or better) will be standard on all motherboards. It will be automatically disabled when you buy a "good" video card(!) By that time, GPU coprocessing will be routine and BOINC work will all be done on an array of shader units, leaving the CPU completely free to do 2D user work like spreadsheets and net browsing.

ALSO:

I have no idea why everyone talks about fan noise! Put the PC under the desk, then! Do these people have to leave the room when the air conditioner cuts on or their office mate switches on a desk fan? It's a soft "whooshing" noise; it's not like it's rap music. Jee-zuss!

It reminds me of people getting worked up about how much electricity their computer uses. Nobody even THINKS about how much their TV uses, not to mention their furnace!

And "radiation" from CRT monitors, and radio waves from cell phones... what IS it with people being superstitious about electronics??

-- Techno Faye, who says techno "Hey!"
My home page:
http://www.myspace.com/150103974
Marek Pióro
Joined: 23 Jan 07 · Posts: 6 · Credit: 133,874 · RAC: 91
Message 35440 - Posted: 24 Jan 2007, 10:36:51 UTC

I don't like half solutions :)) so I bought an easy-to-install CPC water cooler with a passive radiator (2500 W cooling capacity). No extra energy is needed, only the computer's CPU & GPU heat. I plugged in the disk radiator, chassis, etc. Average CPU temp is 45C, GPU 38C, disk and others 34C ... with completely NO NOISE :))).
You can have a powerful CPU & GPU and still sit in silence. :) I have had the water cooler since 2005 with no problems, almost always at 100% CPU usage. Measured noise from the computer at 10 cm distance is 10 dB ... your thoughts are louder :)))
Yours.
if you have any Q. ask at greymark@poczta.onet.pl
Marek
Faye Kane
Joined: 13 Oct 06 · Posts: 9 · Credit: 61,614 · RAC: 0
Message 35469 - Posted: 24 Jan 2007, 23:56:02 UTC - in response to Message 35440.  
Last modified: 25 Jan 2007, 0:16:47 UTC

> I don't like any half solutions

Me neither, but I prefer simple ones, so...

> I bought an easy-install CPC water cooler with a passive radiator (2500 W cooling power)

I stuck my PC out the window (and it's WAY freezing outside).

I put it on a table out on the porch. I ran the cables under the slightly-open window, with a towel blocking the air (I had to get extension monitor and keyboard cables). For the hell of it, I also took the case off and aimed a small desk fan at the inside of the system.

> average CPU temp is 45C, GPU 38C

As I type, my CPU is 19C and my GPU 21C. Under heavy benchmark load, the CPU is 28C and the GPU 31C, which is below body temperature and would therefore be cool to the touch if you could get under the heat sink.

I'm clocking my 2.4 GHz CPU at 3.0 GHz only because my board doesn't go any higher. I'm running the GPU 8 MHz faster than anyone has ever run that GPU (that I could find on the internet). My northbridge chip is cold, and the thin memory modules are, judging by touch, frozen solid (i.e. below freezing). The CPU is at max multiplier (18) and all memory timings are at their absolute fastest.

The whole sys is rock stable, and I'm getting 3D benchmarks on an AGP card that are typical of a high-end GeForce 7900-series card.

> ... with completely NO NOISE

Mine too!

Then again, it IS outside the house...

This is working SO well that when the weather gets warm, I'm going to buy a little $50 refrigerator, drill a hole in the side for cables, and turn it up to max (which would freeze food). If I get similar performance, I'll spray-paint the refrigerator, slap a hot-rod decal on it, snap some pix, and sell an article to Computer Power User.

== techno Faye, who says techno "Hey!"
My home page:
http://www.myspace.com/150103974
BennyRop
Joined: 17 Dec 05 · Posts: 555 · Credit: 140,800 · RAC: 0
Message 35470 - Posted: 25 Jan 2007, 0:18:49 UTC

The folks over at Folding@home were working with nVidia on getting the client to work on nVidia GPUs, but they ran into so many problems and such poor performance that they ended up switching to ATI GPUs to get it working at a decent speed.

We were told that the new ATI GPU clients were 20-40 times faster than the old client, but most of us missed the fact that it is 20-40 times faster at what it does, and what it does is a subset of the normal CPU client. There was code in the CPU client that couldn't take advantage of the GPU's abilities, and they didn't use that in the GPU client. It doesn't speed up everything.

It'd be nice to see what optimization is available with the nVidia API, and whether that will prolong the life of some of our older AGP-capable crunchers on this project. But it won't be 100 times faster (with the latest nVidia card) than the current Rosetta client while performing everything the current client does.




Faye Kane
Joined: 13 Oct 06 · Posts: 9 · Credit: 61,614 · RAC: 0
Message 35478 - Posted: 25 Jan 2007, 3:56:03 UTC - in response to Message 35470.  
Last modified: 25 Jan 2007, 4:39:32 UTC

> The folks over at Folding at Home were working with nVidia on getting the client to work on nVidia GPUs but ran into so many problems and poor performance that they ended up switching to the ATI GPUs to get it to work at a decent speed.

Yeah, but wasn't that, like, over a year ago, before the GPU API and integrated development environment?

> There was code in the CPU client that couldn't take advantage of the GPU abilities and they didn't use that in the GPU client. It doesn't speed up everything.

Mmmmm... I dunno. Let's see what it looks like in five years. Everyone always thinks only in terms of today's hardware.

When I was a kid, I heard about Moore's law and excitedly figured out how long it would take for computers to be a thousand times faster (20 years). I was stunned. Then I realized that no one was thinking in terms of what could happen if computers were THOUSANDS of times faster. Look at a PC magazine from that time: "Tron" was state of the art, and it was rendered offline on a mainframe. No one thought about photorealistic 3D animation at all. And for GAMES? Nobody even considered it! Today, my telephone can render that movie in real time.

Sure, FAH rejected nVidia GPU processing, and other distributed projects rejected ALL GPU processing. But they rejected 2006 GPU processing, with its narrow floating-point registers and dozen shaders.

HOWEVER:

No one will say it out loud, but Moore's law has already ground to a halt for CPUs, because general-purpose computing is linear. So Intel and AMD went parallel. But no one is buying multicore devices except for special-purpose apps like servers. Graphics, however, are HIGHLY parallel... and so is scientific computing.

In a very few years, the graphic chip in a common motherboard will be THOUSANDS of times faster at parallel tasks than the CPU. Why do I think that?

Because, for the hell of it, I just typed the release dates and texel fill rates of nVidia GPUs into a spreadsheet. Through the magic of the log-base-2 function, I can reveal to the world that the last four doubling times for GPU horsepower were:

24 months
17 months
16 months
9 months

And that doesn't even take SLI into account, or putting multiple GPUs on one card (which is already being done).

If the doubling time were six months, then we'd have GPUs which are a thousand times faster in just five years. In that time, I doubt that single-core CPUs will be even twice as fast.
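For anyone who wants to repeat the arithmetic, the doubling-time calculation here is just months elapsed divided by log2 of the performance ratio over that interval. A quick sketch in Python; the numbers in the example are illustrative placeholders, not real GPU fill-rate data:

```python
from math import log2

def doubling_time_months(months_elapsed, perf_ratio):
    """Implied doubling time, given that performance multiplied by
    perf_ratio over months_elapsed months."""
    return months_elapsed / log2(perf_ratio)

# Placeholder example: if fill rate quadrupled (ratio 4) over 18 months,
# the implied doubling time is 9 months.
print(doubling_time_months(18, 4))  # 9.0
```

Feed in real (date, fill-rate) pairs from the spec sheets and the same formula reproduces the list above.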

I'd bet A WHOLE DOLLAR that, soon, scientific computing will be done exclusively on the (vastly more versatile) shader units of video cards, with the CPU doing just dispatch.

Techno-Faye says techno "Hey!"
My home page:
http://www.myspace.com/150103974
Marek Pióro
Joined: 23 Jan 07 · Posts: 6 · Credit: 133,874 · RAC: 91
Message 35524 - Posted: 25 Jan 2007, 23:26:01 UTC
Last modified: 25 Jan 2007, 23:27:45 UTC

Good evening... or whatever else you've got outside the window :) Faye Kane.

I agree, the GPU is an effective processing unit. BUT... there is always a "but". :) The CPU is made to do everything (and everything means nothing very well; central = cemetery), and the GPU is (I'm not sure... :)) designed for nice-looking elements :)) hehe, which means graphics = good looking, not processing BOINC.
It's a joke.
Maybe someday the GPU will be plugged into the CPU, or the CPU into the GPU; we'll see. But for now the GPU has to have an interpreter to work effectively with BOINC-like applications, which means it is A BIT slower than the CPU. ... If I'm wrong... I'm sorry for my stupidity. :) And you can treat me with garlic :))))

... A simple solution for cooling the CPU? :))) I admire your creativity. I plugged in a water cooler because it's cheap (CPC offers it for 100 USD) and effective. I've got this computer in my wardrobe, with no window... and the temperature inside is 26-28C. So? A simple solution... break through the wall... (like your (mine too) favorite music group :)) ) and put the comp outside the window :)). That's the reason why I invested in water :) I hope not like in "Watergate" :)
I plugged this cooler into the CPU (an Athlon XP 3000 running at 2.2 GHz) and a Sapphire ATI 9550, plus the power supply, chassis, and hard disk, and it works.
... I wondered if I could put a water cooler on my 400 MHz MDA Qtek palmphone (Pocket PC), but then it would be two pocket PCs :))) Joke.

Have a nice WINTER :) If you need more towels to cool down the PC, I offer mine. :) Don't be angry, I'm a very smiley man... smiley = stupid? :))
YOurs
Marek
Faye Kane
Joined: 13 Oct 06 · Posts: 9 · Credit: 61,614 · RAC: 0
Message 35538 - Posted: 26 Jan 2007, 6:30:07 UTC - in response to Message 35524.  

marek:

Does the water cooler actively COOL the water, or just make it less hot?

That is, can it ever cool the CPU/GPU below ambient air temperature (like a refrigerator does), or is it really just an expensive heat sink?

== techno faye, who says techno "Hey!"
My home page:
http://www.myspace.com/150103974
Tom Philippart
Joined: 29 May 06 · Posts: 183 · Credit: 834,667 · RAC: 0
Message 35569 - Posted: 26 Jan 2007, 18:08:57 UTC - in response to Message 35524.  
Last modified: 26 Jan 2007, 18:09:34 UTC


> may someday GPU will be plugged into CPU or CPU into GPU, we'll see.


AMD is already developing this with ATI: it will be a multi-core CPU with a GPU on one single chip. They said it will be released in 2008 or 2009, and it will be called "Fusion".

Faye Kane
Joined: 13 Oct 06 · Posts: 9 · Credit: 61,614 · RAC: 0
Message 35583 - Posted: 26 Jan 2007, 23:36:55 UTC - in response to Message 35575.  
Last modified: 26 Jan 2007, 23:55:43 UTC

> try to find some with 400W (0.4 kWh/1h) cooling power (not 1 kWh/24h as a normal A-class refrigerator has) ... it could be more expensive than 50 USD. 400W is the max power usage of a standard (tuned-up) comp.

WELLL, let's just think about that! Yes, 1 kWh per day is the power consumption of a refrigerator, but:

a) that's power consumption, not cooling capacity, and

b) that's for large kitchen refrigerators.

But, you say, both of those factors would make the cooling capacity of a small refrigerator LESS, not more.

You're right again! However:

a) I was actually going to use a small freezer, not a refrigerator; a freezer has considerably more cooling capacity.

b) Power consumption is specified at a particular "duty cycle": the percent of time the device is actually running. What is this for a kitchen freezer? I don't know, but for advertising purposes the Frigidaire company is going to want to minimize the stated power consumption, and so is going to use a wildly optimistic duty cycle.

For a standalone freezer, unlike a refrigerator, they are probably going to assume it gets opened, like, never. The duty cycle they use will certainly assume the freezer is full of frozen food, which makes it almost unnecessary to run the compressor. We can multiply their numbers by the reciprocal of the duty cycle they use since, being hacker wastrels hellbent on improving CPU performance at any cost, we don't care about the environment and are willing to let this thing run all the time.

c) Even if you can't freeze your PC solid (as I did by putting it outside), remember that heat flowing across different temperature gradients is NOT linear! A heat sink will pull MUCH more heat with 10 deg C air blowing over it than with room-temperature air. The point being that the colder you make the air, the more value you get from each degree of cold.

d) ANY cooling is an improvement over letting the thing sit in ambient air.

Unfortunately, the relevant numbers aren't readily available for anything but lab freezers, which are FAR more powerful (like, minus 30C).

BUT:
There's a simpler way to look at it. For $11, you can get 3-inch-square thermoelectric panels that pump 170 watts of heat to one side of the plate. I screwed around with some, and they're amazing. Within seconds of plugging one in, one side gets as hot as a frying pan and the other side, a quarter-inch away, gets ice buildup. If you lick your finger and touch the cold side before the ice condenses, your finger will stick to the surface.

Two of them would remove all the heat from a PC in a sealed chamber, and adding more would cool the chamber as cold as you like. You do have to use huge heat sinks, though. I had one that weighed about five pounds. Put a desk fan in front of something like that when it's frying-pan hot and it dissipates a LOT of heat!
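Taking the quoted 170 W per panel at face value, the panel count for a given heat load is simple arithmetic. A sketch; the 300 W load below is an assumed example figure, not a measurement:

```python
from math import ceil

WATTS_PER_PANEL = 170  # heat pumped per thermoelectric panel, per the figure above

def panels_needed(pc_heat_watts):
    """Minimum number of panels to move all of the PC's heat out of a sealed chamber."""
    return ceil(pc_heat_watts / WATTS_PER_PANEL)

# An assumed ~300 W PC load needs two panels; a 400 W load would need three.
print(panels_needed(300))  # 2
```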
> I agree. You've got a great idea to find something new, effective, good-looking, etc. And if you succeed, I will be the first person to buy your patent.

Nice thought, but I'm just in it for the hack. If I wanted to make money, I'd go among those horrid human creatures and get a job again, instead of screwing around all day with electronic gizmos and my kitty cat.
Techno-Faye says techno "Hey!"
My home page:
http://www.myspace.com/150103974
BennyRop
Joined: 17 Dec 05 · Posts: 555 · Credit: 140,800 · RAC: 0
Message 35584 - Posted: 26 Jan 2007, 23:43:39 UTC

Correct me if I'm wrong, but Moore's observation was that the number of transistors composing a CPU was doubling every so often. The original time frame has shifted a few times since his observation was first published: 2 years, 18 months, and perhaps we've hit the 12-month mark by now. It's not about frequency, or the crunching power of the CPU. It wasn't about size either, although all three of those are also improving and helping keep the observation in effect.

Putting 2, 4, 8, 16, or 32 cores on a CPU is just as valid a way of increasing the ability of the CPU (for multithreaded apps, or for those of us using all those extra cores for DC projects) as increasing the frequency and power usage (NetBurst!!!), and it has doubled, quadrupled, and octupled the number of transistors on the CPU.

GPUs have had an even shorter doubling period, and now you can have video cards eating up 4 times or more the power of the high-end CPU they're being driven by.. :) But when they're 1000 times faster than they are today, that still only means that they can do what they do today... at 1000 times the speed. Everything they haven't been optimized for will run at much reduced speeds.
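The point that only part of the workload speeds up is exactly Amdahl's law: if only a fraction p of the work benefits from a speedup s, the overall speedup is 1 / ((1-p) + p/s). A quick illustrative sketch; the 80% and 1000x figures below are made-up, not measurements of any client:

```python
def overall_speedup(p, s):
    """Amdahl's law: p = fraction of work accelerated, s = speedup of that fraction."""
    return 1.0 / ((1.0 - p) + p / s)

# Even a 1000x-faster GPU only helps the accelerated fraction: if 80% of
# the work runs on the GPU, the whole job gets less than 5x faster.
print(overall_speedup(0.80, 1000))  # ~4.98
```

The serial remainder dominates, which is why "20-40x at what it does" does not translate into 20-40x for the whole client.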


Faye Kane
Joined: 13 Oct 06 · Posts: 9 · Credit: 61,614 · RAC: 0
Message 35587 - Posted: 27 Jan 2007, 0:27:07 UTC - in response to Message 35584.  
Last modified: 27 Jan 2007, 0:56:55 UTC

> Correct me if I'm wrong - but Moore's observation was that the number of transistors composing a cpu was doubling every so often.

Yes, you're correct! But "horsepower" maps almost linearly to transistor count (yes, yes, the Pentium is much more efficient than the 8088, but that efficiency is due to transistor count).

> The original time frame has shortened a few times since his observation was first published. 2 years, 18 months, and perhaps we've hit the 12 month mark by now.

I wish I knew who started that "18 month" rumor, but it JUST isn't true! See: http://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Moore_Law_diagram_%282004%29.png/700px-Moore_Law_diagram_%282004%29.png

Note that the last several points on that graph are for DUAL-CORE processors. Look at the points on the graph again. Ignore the dotted lines and the last few data points and tell me what you see. You see a curve which clearly bends DOWNWARD: the doubling time is increasing.

> It's not about frequency

Correct!

> or the crunching power of the cpu.

Incorrect! For linear computation, it is. And that's what's going to drive consumer CPUs. I just don't see the performance of the family computer improving by 32x on a 32-core CPU. And despite the valiant and noble efforts of you dual-core DC record-breakers, DC is done mainly on common family PCs, not multicore servers. I can't see the family PC ever having 128 cores, but this evening you can drive to CompUSA and buy a video card with 128 general-purpose devices capable of running fluid-dynamics equations independently.

> GPUs have had an even shorter doubling period

Extra true! In an earlier post I calculated it to have gone from 24 months to nine months in just 5 years. I didn't post the graph I made, but it's clear as day from looking at the log-base-2 chart that, unlike the CPU doubling function, the rate for GPUs is increasing (the graph isn't a line, but an upward-sloping curve). My *guess* from purely visual examination was that in about a year, GPU doubling time would be 6 months, which is why I used that number in my earlier post.

> and now you can have video cards eating up 4 times or more the power usage of the high end cpu they're being driven by

Yeah, isn't that a RIOT! And the best is yet to come. I had to order a 650-watt power supply just to make my new graphics card run, which meant I had to stare at it for a week without using it. Aggh! And I read a post by a guy who had 2 GeForce 8800s in SLI mode who said that no matter how you measured it (by weight, power consumption, or computing horsepower), his machine was basically a big graphics vector engine with 256 parallel processors and some insignificant stuff (like a motherboard) attached to support them.

> But when they're 1000 times faster than they are today, that still only means that they can do what they do today.. at 1000 times the speed. Everything they haven't been optimized for will run at much reduced speeds.

Uhhh... I dunno about that! Everybody thinks in terms of what computers are TODAY. Shaders are becoming vastly more general-purpose. Just five years ago there were only pixel and vertex shaders. Then came geometry shaders, and now physics shaders. The latest thing is unified shaders, where any unit can perform any function.

Sure, the CPU will still dispatch tasks to the shaders, as it does in nVidia's new shader-for-science language. Nevertheless, I think these things will become arrays of hundreds (or even thousands) of little general-purpose vector processors using IEEE floating point... and very soon too!
Techno-Faye says techno "Hey!"
My home page:
http://www.myspace.com/150103974




©2022 University of Washington
https://www.bakerlab.org