Message boards : Number crunching : OT: A closer look at Folding@home on the GPU
The_Bad_Penguin Send message Joined: 5 Jun 06 Posts: 2751 Credit: 4,271,025 RAC: 0 |
"Using an extrapolated point total for two CPU clients running in parallel, which is pretty realistic given how Folding@home burdens the CPU, we'd expect to generate around 1,798 points while pulling 185.6W, which is good for close to 9.7 points per watt. The GPU client, on the other hand, generated 2,640 points while pulling 195.6W, yielding close to 13.5 points per watt. Interestingly, with our test system running one CPU and one GPU client, we generated a total of 3,539 points pulling 228W, or 15.5 points per watt. Unfortunately, the scoring scheme for Stanford's GPU folding client doesn't reflect the apparent processing power advantage of graphics processors like the Radeon X1900 XTX. The use of a benchmark system is consistent with how points are awarded with the CPU client. Still, if a GPU really is doing vastly more folding work than a CPU, perhaps the points system should weight GPU time more heavily." FWIW: Points and power consumption explored |
student_ Send message Joined: 24 Sep 05 Posts: 34 Credit: 4,737,409 RAC: 525 |
Are there plans for a GPU BOINC client, or incorporating a GPU feature into a future release? I've heard of the benefits of GPU distributed computing, but didn't realize it was so close to being implemented. GPU DC seems like it would be a significantly better investment in terms of results/$. |
BennyRop Send message Joined: 17 Dec 05 Posts: 555 Credit: 140,800 RAC: 0 |
The FaH GPU client is a subset of the normal CPU client (Gromacs). What it does, it does 20-40x faster than a CPU client. But the GPU client doesn't replace the CPU client. Also keep in mind how long it took to go from idea to actual beta client at FaH. David Baker expressed interest in finding ways of making use of a GPU with Rosetta, and stated that the Rosetta team had been in contact with Micro$oft about developing a Rosetta client for the Xbox 360. Guess we'll just have to wait and see. |
The_Bad_Penguin Send message Joined: 5 Jun 06 Posts: 2751 Credit: 4,271,025 RAC: 0 |
Are there plans for a GPU BOINC client, or incorporating a GPU feature into a future release? I've heard of the benefits of GPU distributed computing, but didn't realize it was so close to being implemented. GPU DC seems like it would be a significantly better investment in terms of results/$. From SlashDot: "The Folding@Home project has put forth some impressive performance numbers with the GPU client that's designed to work with the ATI X1900. According to the client statistics, there are 448 registered GPUs that produce 29 TFLOPS. Those 448 GPUs outperform the combined 25,050 CPUs registered by the Linux and Mac OS clients. Ouch! Are ASICs really that much better than general-purpose circuits? If so, does that mean that IBM was right all along with their AS/400, iSeries product which makes heavy use of ASICs?" Impressive GPU Numbers From Folding@Home |
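Working backwards from the quoted Slashdot figures (an illustrative calculation of my own, not from the article), the implied per-device throughput gap is roughly:

```python
# Per-device throughput implied by the quoted figures: 448 GPUs deliver
# 29 TFLOPS, while the 25,050 Linux/Mac CPUs combined produce less than
# that, so the per-CPU number below is only an upper bound.

total_gpu_flops = 29e12
gpus, cpus = 448, 25_050

per_gpu = total_gpu_flops / gpus        # ~65 GFLOPS per GPU
per_cpu_bound = total_gpu_flops / cpus  # < ~1.2 GFLOPS per CPU

print(f"per GPU:               {per_gpu / 1e9:.0f} GFLOPS")
print(f"per CPU (upper bound): {per_cpu_bound / 1e9:.1f} GFLOPS")
print(f"ratio:                 at least {per_gpu / per_cpu_bound:.0f}x")
```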
Astro Send message Joined: 2 Oct 05 Posts: 987 Credit: 500,253 RAC: 0 |
It appears Dr. Anderson has been contacted by ATI about a GPU BOINC client, or some form of GPU usage within projects. Here's a forwarded posting of an email from ATI to him that he posted on a mailing list. I'm not sure how sensitive this is, so I clipped out emails and full names.

[Fwd: Working with the BOINC development team to create a ATI GPU (graphics processor unit) accelerated version of your BOINC engine/client]

[Note to BOINC projects: GPUs (like ATI and NVIDIA) can provide lots of compute power (for some apps). It may take a fair amount of programming effort to achieve this. If you're interested, check out this offer of technical help from ATI. -- David ]

-------- Original Message --------
Subject: Working with the BOINC development team to create a ATI GPU (graphics processor unit) accelerated version of your BOINC engine/client
Date: Thu, 19 Oct 2006 17:44:03 -0400
From: Andrew XXXXXXX <XXXXXXX@ati.com>
To: <xxxxxxx@ssl.berkeley.edu>

Hi David,

My name is Andrew XXXXXXXX, I work at ATI Technologies Inc. as software product manager, and ATI would be very interested in working with you on developing a GPU accelerated version of BOINC. One of ATI’s new initiatives is to find new uses for the GPU (other than just rendering graphics) – due to the parallel processing nature of the GPU, there are a number of other applications that can benefit from using the GPU. We have recently worked with Stanford University on their distributed computing project Folding@Home, and the new GPU accelerated client has significantly increased the speed of protein folding. I’ve included a couple links below with more information:

* http://www.ati.com/buy/promotions/folding/index.html
* http://folding.stanford.edu/FAQ-ATI.html

If you are interested in working with ATI on creating an ATI GPU accelerated version of BOINC let me know, and I’ll put you in touch with the right developers (who will gladly show you how ATI GPUs (basically through a custom “API” that we’ve developed specifically for GPGPU applications) can help you increase the performance of any BOINC project). I’m not sure if you’d be interested in updating the BOINC client in general (so all projects using BOINC could use the GPU), or just working on a BOINC specific project (SETI@Home being the most important one). ATI would gladly help out in either case.

Thanks, and I hope to hear back from you,
Andrew |
Keck_Komputers Send message Joined: 17 Sep 05 Posts: 211 Credit: 4,246,150 RAC: 0 |
The BOINC client is basically already set up for using the GPU; however, actually doing it is up to the projects. The client collects data on the host. The project uses this to see if the right card is there. All the project has to do is set up a task with the non_cpu_intensive flag set. That task would then use the GPU directly. As nice as it would be, I still don't think BOINC will be able to support GPUs transparently to the projects for the foreseeable future. The differences in instruction sets and precision seem very difficult to overcome. BOINC WIKI BOINCing since 2002/12/8 |
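A minimal sketch of the flow described above, assuming nothing about actual BOINC server internals; the class and function names are hypothetical, and only the non_cpu_intensive flag itself comes from the post:

```python
# Illustrative sketch only -- not real BOINC scheduler code. It mirrors the
# idea above: the client reports what hardware the host has, the project
# checks for a suitable card, and GPU tasks are flagged non_cpu_intensive
# so they can run alongside normal CPU work units. All names are made up.

from dataclasses import dataclass

@dataclass
class HostInfo:
    gpu_vendor: str = ""   # e.g. "ATI", as reported by the client
    gpu_model: str = ""    # e.g. "Radeon X1900 XTX"

def has_supported_gpu(host: HostInfo) -> bool:
    # Only hand out GPU work if the right card is present.
    return host.gpu_vendor == "ATI" and "X19" in host.gpu_model

def make_task(host: HostInfo) -> dict:
    if has_supported_gpu(host):
        return {"app": "rosetta_gpu", "non_cpu_intensive": True}
    return {"app": "rosetta_cpu", "non_cpu_intensive": False}

print(make_task(HostInfo("ATI", "Radeon X1900 XTX")))
print(make_task(HostInfo("NVIDIA", "GeForce 7900 GTX")))
```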
FluffyChicken Send message Joined: 1 Nov 05 Posts: 1260 Credit: 369,635 RAC: 0 |
It appears Dr. Anderson has been contacted by ATI about a GPU BOINC client, or some form of GPU usage within projects. Here's a forwarded posting of an email from ATI to him that he posted on a mailing list. I'm not sure how sensitive this is, so I clipped out emails and full names. Hopefully one of the mods can give D. Baker a nudge to get involved as well ;-) They already have folding experience, and having AMD behind you cannot be bad. (note to people that do not know: AMD bought ATI) Team mauisun.org |
FluffyChicken Send message Joined: 1 Nov 05 Posts: 1260 Credit: 369,635 RAC: 0 |
It appears Dr. Anderson has been contacted by ATI about a GPU BOINC client, or some form of GPU usage within projects. Here's a forwarded posting of an email from ATI to him that he posted on a mailing list. I'm not sure how sensitive this is, so I clipped out emails and full names. The other side of this is that most of the new-gen consoles use ATI as their graphics processor, so you may be able to use it to benefit from that. Team mauisun.org |
The_Bad_Penguin Send message Joined: 5 Jun 06 Posts: 2751 Credit: 4,271,025 RAC: 0 |
It appears Dr. Anderson has been contacted by ATI about a GPU BOINC client, or some form of GPU usage within projects. Here's a forwarded posting of an email from ATI to him that he posted on a mailing list. I'm not sure how sensitive this is, so I clipped out emails and full names. In the grand scheme of things, I don't know how timing is going to impact all of this. It's possible that beginning in about 18 months the issue may be moot. If true, is it worth expending significant resources to create code for putting BOINC projects on stand-alone GPUs? AMD-ATI to make a GPU on a CPU |
dcdc Send message Joined: 3 Nov 05 Posts: 1831 Credit: 119,513,695 RAC: 9,561 |
In the grand scheme of things, I don't know how timing is going to impact all of this. It's possible that beginning in about 18 months the issue may be moot. If true, is it worth expending significant resources to create code for putting BOINC projects on stand-alone GPUs? I think it will certainly still be worthwhile, as the GPU that will be integrated will probably be relatively low-end, since it's aimed at OEMs as a replacement for current integrated graphics solutions. It'll also be at least a few years before that's a reality in any major numbers. I'm sure the integrated GPU features will be useful if programs can make use of them without requiring coding changes, or if only minor compiler changes are required, but for the mid to high end, separate GPUs will be around for a while yet ;) |
Mats Petersson Send message Joined: 29 Sep 05 Posts: 225 Credit: 951,788 RAC: 0 |
I don't know much about how the integrated GPU will actually work, but I strongly suspect that the difference from the current generation of products will be small, except that the signals from the CPU go more directly to the GPU rather than via a number of extra steps - but to software, the GPU will still be a GPU, and not an integrated part of the processor per se. So, whilst the communication to the GPU itself will possibly be faster (should be!), it will still require some pretty special software. Another problem, which applies to just about all of the GPUs, is that they have a pretty long latency from the start of a calculation until the result comes out. That's because GPUs are designed with VERY long pipelines (and by long, I don't mean Pentium 4 long, but something like 3-10x that of a P4). The calculation capacity is phenomenal, but it's based on doing the same thing over and over on similar data. That would probably work for SOME of the Rosetta calculations, but in some other instances, it may not... -- Mats |
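To illustrate the latency-versus-throughput point, here is a toy model with made-up numbers (not Folding@home or vendor data): a deep pipeline only approaches its peak rate when the same operation runs over a large batch of similar data.

```python
# Toy pipeline model: the first result appears only after the pipeline
# fills (the latency described above), then one result completes per cycle.
# The pipeline depth and clock speed below are invented for illustration.

def effective_rate(n_items: int, pipeline_depth: int, clock_hz: float) -> float:
    """Items processed per second for a batch of n_items."""
    cycles = pipeline_depth + (n_items - 1)  # fill once, then 1 item/cycle
    return n_items / (cycles / clock_hz)

for n in (1, 100, 1_000_000):
    rate = effective_rate(n, pipeline_depth=200, clock_hz=650e6)
    print(f"{n:>9} items -> {rate / 1e6:8.1f} M items/s")
```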
FluffyChicken Send message Joined: 1 Nov 05 Posts: 1260 Credit: 369,635 RAC: 0 |
I don't know much about how the integrated GPU will actually work, but I strongly suspect that the difference from the current generation of products will be small, except that the signals from the CPU go more directly to the GPU rather than via a number of extra steps - but to software, the GPU will still be a GPU, and not an integrated part of the processor per se. That's how it is done at F@H: it only does some of the calculations and in reality acts as a co-processor. That's the gist I get from it, anyway. But if it can accelerate any part of it, and graphics cards are only going to get faster (ok, GPU-type graphics cards, i.e. programmable ones), then it should benefit. Especially if they're going to help do the hard work :-) I don't have an ATI card but an NVIDIA one; maybe they'll help translate to theirs. Though Microsoft has an 'Accelerator' technology in the research labs which I think is supposed to make this work across 'graphics' platforms. [I think] Team mauisun.org |
Seventh Serenity Send message Joined: 30 Nov 05 Posts: 18 Credit: 87,811 RAC: 0 |
With all this news with GPU clients possibly coming to BOINC projects and already being out for Folding@Home, I wish ATI would improve their Linux fglrx driver to be as good as the driver for Windows and to also allow the GPU to run Folding@Home & BOINC projects (when done). Otherwise, I'll still have my doubts about ATI being a good company. "In the beginning the universe was created. This made a lot of people very angry and is widely considered as a bad move." - The Hitchhiker's Guide to the Galaxy |
River~~ Send message Joined: 15 Dec 05 Posts: 761 Credit: 285,578 RAC: 0 |
With all this news with GPU clients possibly coming to BOINC projects and already being out for Folding@Home, I wish ATI would improve their Linux fglrx driver to be as good as the driver for Windows and to also allow the GPU to run Folding@Home & BOINC projects (when done). AMD (who now own ATI) were very supportive of the linuxbios project, I understand. I think if some Linux volunteers start doing it then ATI/AMD will be forthcoming with the technical background. Whether that would go as far as writing code I'm not so sure. In contrast, Intel refused to release vital info to the linuxbios project, and at one time there was a call to boycott Intel as a result. btw, linuxbios is a project to use a Linux CLI as the BIOS - Google it for more info River~~ |
Tarx Send message Joined: 2 Apr 06 Posts: 42 Credit: 103,468 RAC: 0 |
I don't have an ATI card but an NVIDIA one; maybe they'll help translate to theirs. Though Microsoft has an 'Accelerator' technology in the research labs which I think is supposed to make this work across 'graphics' platforms. [I think] The current gen of NVIDIA cards (the 6xxx and 7xxx series) were originally used for the GPU folding client, but were found to be much slower than the ATI X1xxx series cards (an X1600 Pro is possibly faster than a 7900 GTX) due to several technical reasons (e.g. cache coherence). As they didn't want to support/develop two code paths (one for ATI and one for NVIDIA) when the ATI cards were so much faster, they are not planning to support/release the NVIDIA GPU client for the 6xxx/7xxx series (the 8xxx series, however, might be supported once it is properly examined). Now, the current NVIDIA cards are still very fast, so depending on the requirements, they could well be a good fit for other code. |
Seventh Serenity Send message Joined: 30 Nov 05 Posts: 18 Credit: 87,811 RAC: 0 |
I think Folding@Home will be looking into the new nVidia core, the G80. I've heard it contains 128 shaders - the ATI X1950XTX has 48. "In the beginning the universe was created. This made a lot of people very angry and is widely considered as a bad move." - The Hitchhiker's Guide to the Galaxy |
BennyRop Send message Joined: 17 Dec 05 Posts: 555 Credit: 140,800 RAC: 0 |
The G80 had 96 shaders for some models, and 128 shaders for the high-end part in the German review I read. Hopefully, they've ironed out the other issues that caused the nVidia chips to perform so poorly over at FaH. With the competition between ATI and nVidia to provide better crunching, it'll hopefully make any portion of the Rosetta app that is ported to a GPU client just scream along... if the Rosetta team gets a GPU coding guru on staff. |
student_ Send message Joined: 24 Sep 05 Posts: 34 Credit: 4,737,409 RAC: 525 |
I wonder whether the Rosetta team would reallocate resources from developing the Rosetta game (where users guide proteins to a good conformation) to developing a GPU client. The latter certainly seems like a better investment - not only in terms of a 20-40x increase in FLOPS, but probably also a greater attraction for new users and better retention of existing users. Perhaps the two projects' development wouldn't conflict. |
|MatMan| Send message Joined: 10 Oct 05 Posts: 3 Credit: 1,602,625 RAC: 0 |
That's wrong. The PS3 will use a graphics subsystem from nVidia. In general, I'm not very satisfied with the current development of the GPU client at F@H, because only very specific GPUs are supported. They use a proprietary (ATI-specific) driver interface which could be changed or become incompatible with their next generation of GPUs. That is the reason why you can't run their GPU client on nVidia cards -> they would have to develop a new code path for nVidia GPUs, which is crap. So the official statement that nVidia cards would be too slow is only one side of the story. I mean, they get some impressive performance numbers from a few hundred X1900s, but this can't be the right way to bring GPGPU processing power to the masses. An application that benefits from GPUs should run on any SM3 GPU! I really do hope that BOINC and/or Rosetta won't use that ATI-specific driver interface in future GPGPU efforts; it might lead to results in a shorter time period, but not in the long run, I think. Yes, currently I own an nVidia graphics card, but that might change when I buy a new one... |