OT: A closer look at Folding@home on the GPU

Message boards : Number crunching : OT: A closer look at Folding@home on the GPU

The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 29489 - Posted: 17 Oct 2006, 1:56:32 UTC
Last modified: 17 Oct 2006, 2:00:05 UTC

The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 29503 - Posted: 17 Oct 2006, 9:20:31 UTC - in response to Message 29489.  

"Using an extrapolated point total for two CPU clients running in parallel, which is pretty realistic given how Folding@home burdens the CPU, we'd expect to generate around 1,798 points while pulling 185.6W, which is good for close to 9.7 points per watt. The GPU client, on the other hand, generated 2,640 points and pulling 195.6W, yielding close to 13.5 points per watt.

Interestingly, with our test system running one CPU and one GPU client, we generated a total of 3,539 points pulling 228W, or 15.5 points per watt.

Unfortunately, the scoring scheme for Stanford's GPU folding client doesn't reflect the apparent processing power advantage of graphics processors like the Radeon X1900 XTX. The use of a benchmark system is consistent with how points are awarded with the CPU client. Still, if a GPU really is doing vastly more folding work than a CPU, perhaps the points system should weight GPU time more heavily."

FWIW: Points and power consumption explored
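For anyone who wants to check the arithmetic, here is a quick recomputation of the quoted points-per-watt figures (plain Python; the points and wattage numbers are taken straight from the excerpt above):

# Recomputing the points-per-watt ratios quoted from the TechReport article.
configs = {
    "two CPU clients (extrapolated)": (1798, 185.6),   # points, watts
    "one GPU client":                 (2640, 195.6),
    "one CPU + one GPU client":       (3539, 228.0),
}

for name, (points, watts) in configs.items():
    print(f"{name}: {points / watts:.1f} points per watt")

# Output: roughly 9.7, 13.5 and 15.5 points per watt, matching the article.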

student_
Joined: 24 Sep 05
Posts: 34
Credit: 4,268,288
RAC: 3,424
Message 29645 - Posted: 19 Oct 2006, 16:41:45 UTC

Are there plans for a GPU BOINC client, or incorporating a GPU feature into a future release? I've heard of the benefits of GPU distributed computing, but didn't realize it was so close to being implemented. GPU DC seems like it would be a significantly better investment in terms of results/$.
BennyRop
Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 29651 - Posted: 19 Oct 2006, 18:32:38 UTC

The FaH GPU client is a subset of the normal CPU client (Gromacs). What it does, it does 20-40x faster than a CPU client, but the GPU client doesn't replace the CPU client. Also keep in mind how long it took FaH to go from idea to an actual beta client.

David Baker has expressed interest in finding ways of making use of a GPU with Rosetta, and stated that the Rosetta team had been in contact with Micro$oft about developing a Rosetta client for the Xbox 360. Guess we'll just have to wait and see.


The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 29668 - Posted: 20 Oct 2006, 0:25:52 UTC - in response to Message 29645.  

From SlashDot:

"The Folding@Home project has put forth some impressive performance numbers with the GPU client that's designed to work with the ATI X1900. According to the client statistics, there are 448 registered GPUs that produce 29 TFLOPS. Those 448 GPUs outperform the combined 25,050 CPUs registered by the Linux and Mac OS clients. Ouch! Are ASICs really that much better than general-purpose circuits? If so, does that mean that IBM was right all along with their AS/400, iSeries product which makes heavy use of ASICs?"

Impressive GPU Numbers From Folding@Home
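Purely as a back-of-the-envelope illustration of what those headline numbers imply per device (plain Python; the blurb only says the GPUs outperform the CPU pool, so the per-CPU figure below is just an upper bound):

# Per-device averages implied by the Slashdot blurb quoted above.
gpu_total_tflops = 29.0     # 448 registered GPUs producing 29 TFLOPS
gpu_count = 448
cpu_count = 25_050          # combined Linux and Mac OS CPU clients

print(f"~{gpu_total_tflops * 1000 / gpu_count:.0f} GFLOPS per registered GPU")

# The blurb only says the GPUs outperform the CPU pool, so the CPU pool
# delivers less than 29 TFLOPS in total; each CPU host therefore averages
# less than:
print(f"{gpu_total_tflops * 1000 / cpu_count:.1f} GFLOPS per CPU host")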


Are there plans for a GPU BOINC client, or incorporating a GPU feature into a future release? I've heard of the benefits of GPU distributed computing, but didn't realize it was so close to being implemented. GPU DC seems like it would be a significantly better investment in terms of results/$.

Astro
Joined: 2 Oct 05
Posts: 987
Credit: 500,253
RAC: 0
Message 29677 - Posted: 20 Oct 2006, 5:47:48 UTC

It appears Dr. Anderson has been contacted by ATI about a GPU BOINC client, or some form of GPU usage within projects. Here's a forwarded copy of an email from ATI to him that he posted to a mailing list. I'm not sure how sensitive this is, so I clipped out email addresses and full names.

[Fwd: Working with the BOINC development team to create a ATI GPU (graphics processor unit) accelerated version of your BOINC engine/client]

[Note to BOINC projects: GPUs (like ATI and NVIDIA) can provide
lots of compute power (for some apps).
It may take a fair amount of programming effort to achieve this.
If you're interested, check out this offer of technical help from ATI.
-- David ]

-------- Original Message --------
Subject: Working with the BOINC development team to create a ATI GPU
(graphics processor unit) accelerated version of your BOINC engine/client
Date: Thu, 19 Oct 2006 17:44:03 -0400
From: Andrew XXXXXXX <XXXXXXX@ati.com>
To: <xxxxxxx@ssl.berkeley.edu>

Hi David,

My name is Andrew XXXXXXXX, I work at ATI Technologies Inc. as software product manager, and ATI would be very interested in working with you on developing a GPU accelerated version of BOINC.

One of ATI’s new initiatives is to find new uses for the GPU (other than just rendering graphics) – due to the parallel processing nature of the GPU, there are a number of other applications that can benefit from using the GPU.

We have recently worked with Stanford University on their distributed computing project Folding@Home, and the new GPU accelerated client has significantly increased the speed of protein folding. I’ve included a couple links below with more information:

* http://www.ati.com/buy/promotions/folding/index.html
* http://folding.stanford.edu/FAQ-ATI.html

If you are interested in working with ATI on creating an ATI GPU accelerated version of BOINC let me know, and I’ll put you in touch with the right developers (who will gladly show you how ATI GPUs (basically through a custom “API” that we’ve developed specifically for GPGPU applications) can help you increase the performance of any BOINC project).

I’m not sure if you’d be interested in updating the BOINC client in general (so all projects using BOINC could use the GPU), or just working on a BOINC specific project (SETI@Home being the most important one). ATI would gladly help out in either case.

Thanks, and I hope to hear back from you,

Andrew
Keck_Komputers
Joined: 17 Sep 05
Posts: 211
Credit: 4,246,150
RAC: 0
Message 29684 - Posted: 20 Oct 2006, 8:16:06 UTC

The BOINC client is basically already set up for using the GPU; actually doing it, however, is up to the projects. The client collects data about the host, and the project uses this to see if the right card is present. All the project has to do is set up a task with the non_cpu_intensive flag set; that task would then use the GPU directly.

As nice as it would be, I still don't think BOINC will be able to support GPUs transparently to the projects for the foreseeable future. The differences in instruction sets and precision seem very difficult to overcome.
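To make that flow concrete, here is a minimal sketch of the decision described above. This is illustrative Python pseudocode only, not actual BOINC scheduler code; the names has_suitable_gpu, make_task, and the host_info fields are all made up for the example.

# Illustrative sketch only: the client reports its hardware, the project
# checks for a suitable card, and the task it sends back carries the
# non_cpu_intensive flag so the client runs it alongside normal CPU work.

def has_suitable_gpu(host_info: dict) -> bool:
    """Pretend check against whatever cards the project's GPU app supports."""
    return (host_info.get("gpu_vendor") == "ATI"
            and host_info.get("gpu_model", "").startswith("X19"))

def make_task(host_info: dict) -> dict:
    task = {"app": "rosetta_cpu", "non_cpu_intensive": False}
    if has_suitable_gpu(host_info):
        task = {"app": "rosetta_gpu", "non_cpu_intensive": True}
    return task

print(make_task({"gpu_vendor": "ATI", "gpu_model": "X1900 XTX"}))
# -> {'app': 'rosetta_gpu', 'non_cpu_intensive': True}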
BOINC WIKI

BOINCing since 2002/12/8
FluffyChicken
Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 29689 - Posted: 20 Oct 2006, 8:35:22 UTC - in response to Message 29677.  

It appears Dr. Anderson has been contacted by ATI about a GPU BOINC client, or some form of GPU usage within projects. [...]


Hopefully one of the mods can give D. Baker a nudge to get involved as well ;-) They already have folding experience, and having AMD behind you cannot be bad. (Note for people who don't know: AMD bought ATI.)
Team mauisun.org
FluffyChicken
Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 29690 - Posted: 20 Oct 2006, 8:36:36 UTC - in response to Message 29689.  

[...] Hopefully one of the mods can give D. Baker a nudge to get involved as well ;-) They already have folding experience, and having AMD behind you cannot be bad. (Note for people who don't know: AMD bought ATI.)


The other side of this is that most of the new-gen consoles use ATI for their graphics processor, so you may be able to use that to your benefit.
Team mauisun.org
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 29693 - Posted: 20 Oct 2006, 12:02:54 UTC - in response to Message 29677.  

In the grand scheme of things, I don't know how timing is going to impact all of this. It's possible that beginning in about 18 months the issue may be moot. If true, is it worth expending significant resources to create code for putting BOINC projects on stand-alone GPUs?

AMD-ATI to make a GPU on a CPU

It appears Dr. Anderson has been contacted by ATI about a GPU BOINC client, or some form of GPU usage within projects. Here's a forwarded copy of an email from ATI to him that he posted to a mailing list. I'm not sure how sensitive this is, so I clipped out email addresses and full names.

dcdc
Joined: 3 Nov 05
Posts: 1829
Credit: 115,751,040
RAC: 58,251
Message 29695 - Posted: 20 Oct 2006, 12:54:22 UTC - in response to Message 29693.  

In the grand scheme of things, I don't know how timing is going to impact all of this. It's possible that beginning in about 18 months the issue may be moot. If true, is it worth expending significant resources to create code for putting BOINC projects on stand-alone GPUs?


I think it will certainly still be worthwhile, as the GPU that gets integrated will probably be relatively low-end, since it's aimed at OEMs as a replacement for current integrated graphics solutions. It'll also be at least a few years before that's a reality in any significant numbers.

I'm sure the integrated GPU features will be useful if programs can make use of them without code changes, or with only minor compiler changes, but for the mid to high end, separate GPUs will be around for a while yet ;)

Mats Petersson
Joined: 29 Sep 05
Posts: 225
Credit: 951,788
RAC: 0
Message 29696 - Posted: 20 Oct 2006, 13:07:44 UTC

I don't know much about how the integrated GPU will actually work, but I strongly suspect that the difference from the current generation of products will be small, except that the signals from the CPU will go more directly to the GPU rather than via a number of extra steps - but to software, the GPU will still be a GPU, and not an integrated part of the processor per se.

So, whilst communication with the GPU itself will possibly be faster (it should be!), it will still require some pretty special software.

Another problem, which applies to just about all GPUs, is that they have a pretty long latency from the start of a calculation until the result comes out. That's because GPUs are designed with VERY long pipelines (and by long, I don't mean Pentium 4 long, but something like 3-10x that of a P4). The calculation capacity is phenomenal, but it's based on doing the same thing over and over on similar data. That would probably work for SOME of the Rosetta calculations, but in other instances it may not...
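To picture that throughput-versus-latency point, here is a toy sketch (ordinary Python, nothing GPU-specific, purely illustrative): the first loop has the shape deep pipelines like, one identical operation over many independent values, while the second chains every step on the previous result, so a long pipeline would mostly sit waiting.

import math

data = [0.001 * i for i in range(100_000)]

# GPU-friendly shape: one identical, independent operation per element.
independent = [math.sin(x) * math.cos(x) for x in data]

# Latency-bound shape: a serial chain where each step needs the previous
# result, so step i+1 cannot start before step i has left the pipeline.
acc = 0.0
for x in data:
    acc = math.sin(acc + x)

print(len(independent), acc)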

--
Mats
FluffyChicken
Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 29704 - Posted: 20 Oct 2006, 15:51:35 UTC - in response to Message 29696.  

I don't know much about how the integrated GPU will actually work [...] The calculation capacity is phenomenal, but it's based on doing the same thing over and over on similar data. That would probably work for SOME of the Rosetta calculations, but in other instances it may not...


That's how it is done at F@H: it only does some of the calculations and in reality acts as a co-processor. That's the gist I get from it, anyway. But if it can accelerate any part of it, and graphics cards are only going to get faster (OK, GPU-type graphics cards, i.e. programmable ones), then it should benefit. Especially if they're going to help do the hard work :-)

I don't have an ATI card, though, but an NVIDIA one; maybe they'll help translate it to theirs. Microsoft also has an 'Accelerator' technology in their research labs which I think is supposed to make this cross-'graphics'-platform. [I think]
Team mauisun.org
Seventh Serenity
Joined: 30 Nov 05
Posts: 18
Credit: 87,811
RAC: 0
Message 29710 - Posted: 20 Oct 2006, 16:18:14 UTC

With all this news of GPU clients possibly coming to BOINC projects, and one already being out for Folding@Home, I wish ATI would improve their Linux fglrx driver to be as good as the Windows driver, and to also allow the GPU to run Folding@Home & BOINC projects (when done).

Otherwise, I'll still have my doubts about ATI being a good company.
"In the beginning the universe was created. This made a lot of people very angry and is widely considered as a bad move." - The Hitchhiker's Guide to the Galaxy
River~~
Joined: 15 Dec 05
Posts: 761
Credit: 285,578
RAC: 0
Message 29805 - Posted: 22 Oct 2006, 7:56:46 UTC - in response to Message 29710.  

With all this news of GPU clients possibly coming to BOINC projects, and one already being out for Folding@Home, I wish ATI would improve their Linux fglrx driver to be as good as the Windows driver, and to also allow the GPU to run Folding@Home & BOINC projects (when done).

Otherwise, I'll still have my doubts about ATI being a good company.


AMD (who now own ATI) were very supportive of the LinuxBIOS project, I understand. I think if some Linux volunteers start doing it, then ATI/AMD will be forthcoming with the technical background. Whether that would go as far as writing code, I'm not so sure.

In contrast, Intel refused to release vital info to the LinuxBIOS project, and at one time there was a call to boycott Intel as a result.

BTW, LinuxBIOS is a project to use a Linux CLI as the BIOS - Google it for more info.

River~~
Tarx
Joined: 2 Apr 06
Posts: 42
Credit: 103,468
RAC: 0
Message 29875 - Posted: 23 Oct 2006, 14:42:10 UTC - in response to Message 29704.  

I don't have an ATI card, though, but an NVIDIA one; maybe they'll help translate it to theirs. Microsoft also has an 'Accelerator' technology in their research labs which I think is supposed to make this cross-'graphics'-platform. [I think]

The current generation of NVIDIA cards (the 6xxx and 7xxx series) was originally used for the GPU folding client, but they were found to be much slower than the ATI X1xxx series cards (an X1600 Pro is possibly faster than a 7900GTX) due to several technical reasons (e.g. cache coherence). As they didn't want to support and develop two code paths (one for ATI and one for NVIDIA) when the ATI cards were so much faster, they are not planning to support or release the NVIDIA GPU client for the 6xxx/7xxx series (the 8xxx series, however, might be supported once it has been properly examined).
Now, the current NVIDIA cards are still very fast, so depending on the requirements, they could well be a good fit for other code.
Seventh Serenity
Joined: 30 Nov 05
Posts: 18
Credit: 87,811
RAC: 0
Message 29901 - Posted: 23 Oct 2006, 22:09:13 UTC

I think Folding@Home will be looking into the new nVidia core, the G80. I've heard it contains 128 shaders - the ATI X1950XTX has 48.
"In the beginning the universe was created. This made a lot of people very angry and is widely considered as a bad move." - The Hitchhiker's Guide to the Galaxy
BennyRop
Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 29905 - Posted: 23 Oct 2006, 23:02:10 UTC

The G80 had 96 shaders for some models, and 128 shaders for the high-end part, in the German review I read. Hopefully they've ironed out the other issues that caused the nVidia chips to perform so poorly over at FaH.

With ATI and nVidia competing to provide better crunching, any portion of the Rosetta app that gets ported to a GPU client will hopefully just scream along... if the Rosetta team gets a GPU coding guru on staff.
student_
Joined: 24 Sep 05
Posts: 34
Credit: 4,268,288
RAC: 3,424
Message 29916 - Posted: 24 Oct 2006, 3:00:05 UTC

I wonder whether the Rosetta team would reallocate resources from developing the Rosetta game (where users guide proteins to a good conformation) to developing a GPU client. The latter certainly seems like a better investment, not only in terms of a 20-40x increase in FLOPS but probably also in attracting new users and retaining existing ones. Perhaps the two projects' development wouldn't conflict.
|MatMan|
Joined: 10 Oct 05
Posts: 3
Credit: 1,602,625
RAC: 0
Message 30057 - Posted: 26 Oct 2006, 14:38:49 UTC - in response to Message 29690.  


The other side of this is that most of the new-gen consoles use ATI for their graphics processor, so you may be able to use that to your benefit.

That's wrong. The PS3 will use a graphics subsystem from nVidia.

In general, I'm not very satisfied with the current development of the GPU client at F@H, because only very specific GPUs are supported. They use a proprietary (ATI-specific) driver interface which could be changed or become incompatible with their next generation of GPUs. That is also the reason why you can't run their GPU client on nVidia cards -> they would have to develop a new code path for nVidia GPUs, which is crap. So the official statement that nVidia cards would be too slow is only one side of the story.

I mean, they get some impressive performance numbers with a few hundred X1900s, but this can't be the right way to bring GPGPU processing power to the masses. An application that benefits from GPUs should run on any SM3 GPU!

I really do hope that BOINC and/or Rosetta won't use that ATI-specific driver interface in future GPGPU efforts; it would lead to results in a shorter time period, but not in the long run, I think.

Yes, currently I own an nVidia graphics card, but that might change when I buy a new one...
