GPU crunchers - boinc 6.4.5 has been released

Message boards : Number crunching : GPU crunchers - boinc 6.4.5 has been released


Profile Greg_BE
Joined: 30 May 06
Posts: 5662
Credit: 5,703,589
RAC: 2,175
Message 58066 - Posted: 20 Dec 2008, 19:19:24 UTC

http://boinc.berkeley.edu/download.php
"Note: if your computer is equipped with an NVIDIA Graphics Processing Unit (GPU), you may be able to use it to compute faster."
ID: 58066
Profile Paul D. Buck

Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 58067 - Posted: 20 Dec 2008, 20:30:14 UTC - in response to Message 58066.  

http://boinc.berkeley.edu/download.php
"Note: if your computer is equipped with an NVIDIA Graphics Processing Unit (GPU), you may be able to use it to compute faster."

At the moment there are only two projects which use the GPU ... SaH and GPU Grid.

On GPU Grid, participants may experience difficulties getting work.

On SaH, if you have the GPU enabled you will only get GPU work for the MB application, meaning that if you want to run more than one MB task at a time, you will have to fiddle with the system.

If you are like me, running 10+ projects per box, there are no problems... you just get a GPU task. If you are up for the thrill I suggest trying 6.5.0 instead of 6.4.5, in that you actually get to run the GPU task along with keeping the regular cores saturated with work. My i7 is running 9 tasks at the moment, with 8 regular tasks and one from GPU Grid.

Of course, I have been one of those users that has had trouble getting work from GPU Grid so YMMV. Of note is that the GPU Grid project is akin to Rosetta so if you are a Rosetta freak this may be down your alley. :)
ID: 58067
Profile ejuel

Joined: 8 Feb 07
Posts: 78
Credit: 4,447,069
RAC: 0
Message 58072 - Posted: 20 Dec 2008, 21:38:33 UTC - in response to Message 58066.  

http://boinc.berkeley.edu/download.php
"Note: if your computer is equipped with an NVIDIA Graphics Processing Unit (GPU), you may be able to use it to compute faster."



Thanks for the update. But what does this mean for RAH? Will RAH be upgraded to take advantage of this new GPU crunching feature? If yes, can we have an ETA?

Thanks!

-Eric
ID: 58072
The_Bad_Penguin

Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 58073 - Posted: 20 Dec 2008, 23:03:43 UTC

@Paul: Wonderful! Trying to build my i7 system as quickly as possible, just need a mobo. Question for you: If I get a mobo with 3 or 4 gpu slots, with Boinc can I then run 8 threads of "regular" Boinc wu's, and 3 or 4 gpu wu's, or am I limited to 8 "regular" and only 1 gpu?

@ejuel: Don't believe anytime in the near future. I've been a big proponent of porting Rosie to the "standardized" PS3, and given the code size and frequent changes, appears not very feasible. I "assume" that since gpu's are "non-standardized" (different mfgr's, gpu architecture, core speeds, amount of ram, etc), the same issues would also exist.
ID: 58073
Profile ejuel

Joined: 8 Feb 07
Posts: 78
Credit: 4,447,069
RAC: 0
Message 58075 - Posted: 21 Dec 2008, 1:30:55 UTC - in response to Message 58073.  
Last modified: 21 Dec 2008, 1:32:05 UTC

@Paul: Wonderful! Trying to build my i7 system as quickly as possible, just need a mobo. Question for you: If I get a mobo with 3 or 4 gpu slots, with Boinc can I then run 8 threads of "regular" Boinc wu's, and 3 or 4 gpu wu's, or am I limited to 8 "regular" and only 1 gpu?

@ejuel: Don't believe anytime in the near future. I've been a big proponent of porting Rosie to the "standardized" PS3, and given the code size and frequent changes, appears not very feasible. I "assume" that since gpu's are "non-standardized" (different mfgr's, gpu architecture, core speeds, amount of ram, etc), the same issues would also exist.



Thanks for the fast reply.

With all due respect to everyone here, my response would be, "why can other projects port/run on GPU and RAH seemingly cannot?"

Maybe other projects have more funds/people/time to keep re-porting/re-writing code every iteration... I dunno. It would be interesting to hear, at least as a project-management response from someone at RAH, why other projects run under GPU.

Again, I mean no disrespect...I'm just very curious about this GPU issue/topic/problem/boundary/limitation. RAH's answer to my (more likely "our") question could also push donors to make a specific donation or allow this community to specifically address the problem. I believe I speak for a high percentage of people on this project when I say "we're here to help!"

:)
ID: 58075
Profile Greg_BE

Joined: 30 May 06
Posts: 5662
Credit: 5,703,589
RAC: 2,175
Message 58076 - Posted: 21 Dec 2008, 1:38:22 UTC - in response to Message 58075.  

@Paul: Wonderful! Trying to build my i7 system as quickly as possible, just need a mobo. Question for you: If I get a mobo with 3 or 4 gpu slots, with Boinc can I then run 8 threads of "regular" Boinc wu's, and 3 or 4 gpu wu's, or am I limited to 8 "regular" and only 1 gpu?

@ejuel: Don't believe anytime in the near future. I've been a big proponent of porting Rosie to the "standardized" PS3, and given the code size and frequent changes, appears not very feasible. I "assume" that since gpu's are "non-standardized" (different mfgr's, gpu architecture, core speeds, amount of ram, etc), the same issues would also exist.



Thanks for the fast reply.

With all due respect to everyone here, my response would be, "why can other projects port/run on GPU and RAH seemingly cannot?"

Maybe other projects have more funds/people/time to keep re-porting/re-writing code every iteration... I dunno. It would be interesting to hear, at least as a project-management response from someone at RAH, why other projects run under GPU.

Again, I mean no disrespect...I'm just very curious about this GPU issue/topic/problem/boundary/limitation. RAH's answer to my (more likely "our") question could also push donors to make a specific donation or allow this community to specifically address the problem. I believe I speak for a high percentage of people on this project when I say "we're here to help!"

:)



There was another thread some time back that said GPUs are good for speed but not for accuracy, and RAH needs accuracy as well as speed; that comes from this post. Penguin also has a point about the different architectures and styles of GPUs. I would "guess" that keeping the program updated is challenge enough on CPUs, and taking on the challenge of writing for GPUs would overwhelm the team.
This is a university-run program, and the private funding they get is more for research results than for programming. At least that's how I read all the different stories on here.
ID: 58076
Profile ejuel

Joined: 8 Feb 07
Posts: 78
Credit: 4,447,069
RAC: 0
Message 58078 - Posted: 21 Dec 2008, 2:06:15 UTC - in response to Message 58076.  



There was another thread some time back that said GPUs are good for speed but not for accuracy, and RAH needs accuracy as well as speed; that comes from this post. Penguin also has a point about the different architectures and styles of GPUs. I would "guess" that keeping the program updated is challenge enough on CPUs, and taking on the challenge of writing for GPUs would overwhelm the team.
This is a university-run program, and the private funding they get is more for research results than for programming. At least that's how I read all the different stories on here.


I didn't know that GPU stuff was, in my book, "lossy". Math is math to me. A+B should equal C no matter what it is "computed" on. :)

But...hey, as long as the project runs correctly on CPUs, I'll keep running it on my quad-core systems.

On a side note, Apple is supposed to be refreshing their Mac Mini in early 2009...not that I am a Mac fan but if the price is right AND they are quad-core chips, boy oh boy would that be sweet to set up a Mini farm.
ID: 58078
The_Bad_Penguin

Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 58079 - Posted: 21 Dec 2008, 2:16:17 UTC

as far as "math is math", true to an extent in distributed computing. the original Folding@Home PS3's were single precision, and it was only with the release of the then "newer" Cell BE chip in the 40GB version, that the PS3's had double precision.

while it IS up to each individual project to decide how to allocate their resources, the science behind certain methodologies just might be such that it is just not a fit for what gpu's are able to offer. for other projects, gpu's will be a godsend.

reminds me a bit of my comments in this thread

ID: 58079
AlphaLaser

Joined: 19 Aug 06
Posts: 52
Credit: 3,327,939
RAC: 0
Message 58083 - Posted: 21 Dec 2008, 3:18:26 UTC - in response to Message 58078.  
Last modified: 21 Dec 2008, 3:20:02 UTC

I didn't know that GPU stuff was, in my book, "lossy". Math is math to me. A+B should equal C no matter what it is "computed" on. :)


If only it were that simple :P

Even at the same float precision, different CPUs can generate different results for an operation due to differing ways in which the floating point unit is implemented in hardware, despite the existence of standards such as IEEE 754.

And then there's the precision issue itself: mathematics assumes an infinite number of digits, but a value can only realistically be stored in finite memory. So at some point the number is truncated, and the number of bits allocated for the value gives rise to single/double/quad precision numbers.

The traditional use of GPUs (gaming graphics) and the Cell BE has not enforced a need beyond single precision FP math, but as these units are pushed towards more general-purpose number crunching uses, that will eventually change.
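The points above can be seen directly. Here is a small illustrative sketch (the helper name is mine, not from the thread) showing that the same sum gives different answers in single vs. double precision, and that floating-point addition is not even associative:

```python
# Floating-point math is not the exact math of textbooks: results depend
# on the precision used and even on the order of operations.
import struct

def to_single(x: float) -> float:
    """Round a Python float (double precision) to single precision
    by packing it into a 4-byte IEEE 754 float and unpacking it."""
    return struct.unpack('f', struct.pack('f', x))[0]

# The same sum in double vs. single precision gives different answers:
d = 0.1 + 0.2                                     # double precision
s = to_single(to_single(0.1) + to_single(0.2))    # single precision
print(d == s)                                     # False

# Even within one precision, addition is not associative:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))     # False
```

This is why two machines (or a CPU and a GPU) can legitimately disagree on the low-order bits of the "same" computation, even when both follow IEEE 754.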
ID: 58083
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 58086 - Posted: 21 Dec 2008, 4:04:29 UTC
Last modified: 21 Dec 2008, 4:05:33 UTC

It would be interesting to hear, at least as a project-management response from someone at RAH, why other projects run under GPU.

Please review Dr. Baker's post on CUDA and on GPUs and optimizations
Rosetta Moderator: Mod.Sense
ID: 58086
Profile Paul D. Buck

Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 58279 - Posted: 31 Dec 2008, 4:46:46 UTC - in response to Message 58073.  
Last modified: 31 Dec 2008, 4:49:00 UTC

@Paul: Wonderful! Trying to build my i7 system as quickly as possible, just need a mobo. Question for you: If I get a mobo with 3 or 4 gpu slots, with Boinc can I then run 8 threads of "regular" Boinc wu's, and 3 or 4 gpu wu's, or am I limited to 8 "regular" and only 1 gpu?

@ejuel: Don't believe anytime in the near future. I've been a big proponent of porting Rosie to the "standardized" PS3, and given the code size and frequent changes, appears not very feasible. I "assume" that since gpu's are "non-standardized" (different mfgr's, gpu architecture, core speeds, amount of ram, etc), the same issues would also exist.

Sorry about the late reply...

Different people are having different experiences with GPU computing.

*IF* you have one GPU card you will, with the proper setup, have 9 tasks in flight ... 8 on the i7 and one on the GPU ... I am personally using 6.5.0 and this is my experience ...

With 2 GPUs installed you get 7 tasks on the CPU and 2 GPU tasks (assuming you can download the tasks from the project), because the GPU Grid application uses so much CPU that the two GPU tasks "starve" the 8th CPU task of runtime.

There is a newer version of the Grid application due out in a week or so that should drop the CPU usage back down.

My experience with SaH is limited to running about 20 tasks to completion. All I can report is that on my 9800 GT the tasks took about 9 min wall clock time to run ... but, SaH has trouble playing well with others and there are 20-30 threads in NC bewailing what is going on ...


YMMV

{edit}Oh, and make sure you have the latest Nvidia drivers ... 180.48 seems to be the best at the moment ... and a GTX280 is about 3-4 times faster than a 9800 GT, based on personal experience {/edit}
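On the multi-GPU question above: by default the BOINC client schedules work only on the most capable GPU. Later 6.x clients added a cc_config.xml option to use every installed card; this is a sketch based on the documented client configuration options, so check which options your client version actually supports:

```xml
<!-- cc_config.xml, placed in the BOINC data directory.
     <use_all_gpus> tells the client to schedule work on every GPU
     rather than only the most capable one. -->
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```

The file is read at client startup, and most BOINC Manager versions can also re-read it from the Advanced menu.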
ID: 58279




©2024 University of Washington
https://www.bakerlab.org