OT: A closer look at Folding@home on the GPU




BennyRop

Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 30068 - Posted: 26 Oct 2006, 20:42:04 UTC

If you've been following the F@H GPU saga for the past few years, you'll remember that it started with nVidia cards, which were considered the most powerful at the time. The project eventually abandoned nVidia development and moved to ATI because cards like the X1800 are set up with enough 32-bit-precision shader engines to make the job relatively easy. Mr. Houston also mentioned that nVidia uses a latency-hiding technique that caused additional problems for the GPU code.
When nVidia changes its approach, we'll likely see the latest nVidia cards supported. If Intel can switch from speed-at-all-costs to multiple cores at reasonable speeds with higher instructions per clock, following AMD's lead, then there's hope for nVidia following ATI's lead.
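
For anyone curious what 32-bit precision actually buys you numerically, here's a toy sketch in plain Python/NumPy (nothing to do with the real F@H GPU kernels; the data and sizes are made up). Single precision carries roughly seven significant digits, which is enough to keep a long accumulation of small force-like terms in the right ballpark:

```python
# Toy illustration only - not Folding@home code. Shows roughly what 32-bit
# float precision gives you when summing a large number of small terms,
# the way a shader-style accumulator might.
import numpy as np

rng = np.random.default_rng(42)
terms = rng.random(1_000_000)          # a million positive "force-like" terms in [0, 1)

ref = terms.sum(dtype=np.float64)      # 64-bit reference result

acc32 = np.float32(0.0)                # naive running sum kept in 32-bit precision
for t in terms.astype(np.float32):
    acc32 += t

rel_err = abs(float(acc32) - ref) / ref
print(f"64-bit sum:     {ref:.6f}")
print(f"32-bit sum:     {float(acc32):.6f}")
print(f"relative error: {rel_err:.2e}")
```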

And yes, I, too, am an nVidiot, not a fanATIc. :)
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 30075 - Posted: 26 Oct 2006, 22:00:34 UTC - in response to Message 30068.  
Last modified: 26 Oct 2006, 22:03:43 UTC

This may be an interesting read: AMD, Intel are come-back kids with X86 vectorisation


darkclown

Joined: 28 Sep 06
Posts: 3
Credit: 222,345
RAC: 0
Message 30087 - Posted: 27 Oct 2006, 5:15:40 UTC

The next generation of nVidia cards, which will be DirectX 10 parts, should be much better in terms of performance. From what I understand, Microsoft has been fairly heavy-handed in dictating tight ranges of acceptable results for various functions in order to be DX10 compliant/certified. The 8800, I believe, will have 128 shaders.
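
For a feel of what "tight ranges of acceptable results" usually means in practice, here's a toy units-in-the-last-place (ULP) comparison in Python. Illustrative only: the 2-ULP budget and the helper names are my own assumptions, not the actual DX10 conformance criteria.

```python
# Toy sketch of a units-in-the-last-place (ULP) tolerance check for 32-bit floats.
# Illustrative only: the 2-ULP budget is an assumed figure, not a DX10 requirement.
import struct


def ulp_distance(a: float, b: float) -> int:
    """Number of representable 32-bit floats between a and b (finite values)."""
    ia = struct.unpack("<i", struct.pack("<f", a))[0]
    ib = struct.unpack("<i", struct.pack("<f", b))[0]
    # Map the sign-magnitude bit patterns onto one monotonic integer scale.
    if ia < 0:
        ia = -2_147_483_648 - ia
    if ib < 0:
        ib = -2_147_483_648 - ib
    return abs(ia - ib)


def within_budget(result: float, reference: float, max_ulps: int = 2) -> bool:
    return ulp_distance(result, reference) <= max_ulps


reference = 1.0 / 3.0                    # what a strict reference implementation returns
result = reference * (1.0 + 1.0e-7)      # a slightly-off value from some faster path
print("ULP distance:         ", ulp_distance(result, reference))
print("within a 2-ULP budget:", within_budget(result, reference))
```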
BennyRop

Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 30088 - Posted: 27 Oct 2006, 5:24:44 UTC

In the German review I read, one model of the new G80 was listed as having 96 shaders, and the top-end model as having 128.
FluffyChicken
Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 30095 - Posted: 27 Oct 2006, 10:46:38 UTC - in response to Message 30057.  


The other side of this is that most of the new-generation consoles use ATI as their graphics processor, so you may be able to use that to your benefit.

That's wrong. The PS3 will use a graphics subsystem from nVidia.



I said most, not all :-)
Team mauisun.org
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 30103 - Posted: 27 Oct 2006, 12:13:26 UTC - in response to Message 30075.  

"In summary, whether for gaming physics, black hole research or financial simulations, a combination of multi-core and vector processing will bring PCs close to the teraflop performance, and most probably cross the teraflop peak speed barrier by 2010 - whether Intel decides to introduce the feature earlier in Nehalem, or wait till Gesher in that year. In a sense, both CPU and GPU approaches can be combined anyway, as they don't exclude each other. At the end, that CPU vector unit could become a core of its on-chip high-end GPU too, couldn't it?"

This may be an interesting read: AMD, Intel are come-back kids with X86 vectorisation
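
As a rough back-of-the-envelope check on that teraflop figure, peak throughput is just cores x vector lanes x FLOPs per lane per cycle x clock. The numbers below are hypothetical examples, not the specs of any particular chip:

```python
# Back-of-the-envelope peak throughput: cores x SIMD lanes x FLOPs/lane/cycle x clock.
# Every number below is a hypothetical example, not the spec of any particular chip.

def peak_gflops(cores: int, simd_lanes: int, flops_per_lane_per_cycle: int, ghz: float) -> float:
    return cores * simd_lanes * flops_per_lane_per_cycle * ghz

# A hypothetical 8-core CPU with 8-wide vector units doing fused multiply-adds (2 FLOPs)
cpu = peak_gflops(cores=8, simd_lanes=8, flops_per_lane_per_cycle=2, ghz=3.0)

# A hypothetical GPU with 128 scalar shader processors, also counting 2 FLOPs per cycle
gpu = peak_gflops(cores=128, simd_lanes=1, flops_per_lane_per_cycle=2, ghz=1.35)

print(f"hypothetical CPU peak: {cpu:6.1f} GFLOPS")
print(f"hypothetical GPU peak: {gpu:6.1f} GFLOPS")
print(f"combined:              {cpu + gpu:6.1f} GFLOPS (still short of a 1000 GFLOPS peak)")
```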

Tarx
Joined: 2 Apr 06
Posts: 42
Credit: 103,468
RAC: 0
Message 30543 - Posted: 3 Nov 2006, 3:54:59 UTC

By the way, I just noticed that Folding@home now supports multiple ATI graphics cards crunching away on the same system.
MintabiePete
Joined: 5 Nov 05
Posts: 30
Credit: 418,959
RAC: 0
Message 31287 - Posted: 17 Nov 2006, 7:42:59 UTC - in response to Message 30068.  

And yes, I, too, am an nVidiot, not a fanATIc. :)

Good one mate, I like your enthusiasm. Guess I'm a fanATIc and that's fantAsTIc :)

FluffyChicken
Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 31289 - Posted: 17 Nov 2006, 8:41:06 UTC - in response to Message 31287.  

Good one mate, I like your enthusiasm. Guess I'm a fanATIc and that's fantAsTIc :)


Now ATI is called AMD (or is it the 'new AMD'? ;-) http://ati.amd.com/ - you need to get them both in the name.
Team mauisun.org
River~~
Joined: 15 Dec 05
Posts: 761
Credit: 285,578
RAC: 0
Message 31316 - Posted: 17 Nov 2006, 17:27:45 UTC - in response to Message 31289.  


Good one mate, I like your enthusiasm. Guess I'm a fanATIc and that's fantAsTIc :)


Now ATI is called AMD (or is it the 'new AMD'? ;-) http://ati.amd.com/ - you need to get them both in the name.


like

guess I'm a fanATIc AMD that's fantAsTIc

HTH ;) R~~
Christoph Jansen
Joined: 6 Jun 06
Posts: 248
Credit: 267,153
RAC: 0
Message 31758 - Posted: 28 Nov 2006, 11:30:05 UTC

Hi all,

Just a thought that came to mind while browsing current graphics card prices:

It looks very much like GPU crunching is concentrating on ATI-based solutions, which is also picked up in this thread. How does that fit with the problems that quite a few users run into when using ATI cards with BOINC screensavers, and with the graphical front end in general, on several projects?

There may be a train pulling out of the station and leaving BOINCers behind, at least to some degree. Is the problem rare enough to simply neglect, or are the people reporting it just the tip of an iceberg, most of which stays hidden because users quit BOINC without further comment?

This is just intended as a cautious question, as I do not have the slightest notion of the problems involved in reconfiguring BOINC to cope with that, or of the value of doing so compared to the effort.

Note: of course I do not think this is a train of thought that only occurred to me. I am sure the programmers themselves are quite aware of all this. It is just that it has not been mentioned in this thread before, and I wanted to throw it into the arena.
Seventh Serenity
Joined: 30 Nov 05
Posts: 18
Credit: 87,811
RAC: 0
Message 31938 - Posted: 2 Dec 2006, 13:25:31 UTC

I've just upgraded my GPU from an ATI X850 XT to an nVidia 7800 GTX - mostly because of SM3, HDR, and the better driver support. I just don't like leaving it idle with its 24 shaders. I seriously wish there was a project that could make use of it.
"In the beginning the universe was created. This made a lot of people very angry and is widely considered as a bad move." - The Hitchhiker's Guide to the Galaxy
Tarx
Joined: 2 Apr 06
Posts: 42
Credit: 103,468
RAC: 0
Message 31974 - Posted: 2 Dec 2006, 22:57:57 UTC - in response to Message 31938.  

Folding@home has now added the X1600/X1650 and X1800 series graphics cards to the supported list (the X1900/X1950 were already supported). Support for the NVIDIA 8000 series is not yet certain, but most expect it will come in the future. The existing NVIDIA 7000 series (and lower) and the ATI X850 series (and lower) will never be supported: the X850 and lower lack the necessary capability, while the NVIDIA 7000 series and earlier have serious bottlenecks for this type of computation, even though the alpha version did run on NVIDIA cards (it was just far too slow). And no, it didn't have much to do with the number of shaders; the issues were things like branching, cache coherency, and so on.
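
To make the branching point concrete, here's a toy model of lock-step (SIMD-style) execution. It is a deliberate oversimplification, not how any real GPU scheduler works, but it shows that when threads in the same group take different sides of a branch, the group pays for both sides:

```python
# Toy model of branch divergence in a lock-step thread group. A deliberate
# simplification, not a simulation of any real GPU's scheduler.

def group_cost(branch_taken: list[bool], cost_if: int, cost_else: int) -> int:
    """Cycles the whole group spends on an if/else in this toy model."""
    cost = 0
    if any(branch_taken):        # at least one thread needs the 'if' side
        cost += cost_if
    if not all(branch_taken):    # at least one thread needs the 'else' side
        cost += cost_else
    return cost

uniform   = [True] * 16                      # all 16 threads agree: only one path runs
divergent = [i % 2 == 0 for i in range(16)]  # threads disagree: both paths are serialized

print("uniform branch:  ", group_cost(uniform, cost_if=40, cost_else=60), "cycles")
print("divergent branch:", group_cost(divergent, cost_if=40, cost_else=60), "cycles")
```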
Paydirt
Joined: 10 Aug 06
Posts: 127
Credit: 960,607
RAC: 0
Message 33210 - Posted: 22 Dec 2006, 21:15:10 UTC

For F@H, CPU crunching and GPU crunching are fairly different. Yes, GPUs are much more powerful, but they cannot yet crunch work units as complex as the CPUs do. So the CPUs can crunch a wider variety of things while the GPUs do more of the grunt work. That is how things stand at present.
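
A toy sketch of that split (the task fields and the dispatch rule are made up for illustration, not how F@H or BOINC actually assign work):

```python
# Toy dispatcher: simple, highly parallel "grunt work" goes to the GPU, anything
# needing features the GPU kernel lacks stays on the CPU. The fields and the rule
# are illustrative assumptions, not how F@H or BOINC actually schedule work.
from dataclasses import dataclass


@dataclass
class WorkUnit:
    name: str
    simple_model: bool   # the kind of simulation the (hypothetical) GPU kernel supports
    atoms: int


def assign(wu: WorkUnit) -> str:
    if wu.simple_model and wu.atoms <= 100_000:
        return "GPU"     # massive arithmetic, simple control flow
    return "CPU"         # more varied or complex models stay on the CPU


work = [
    WorkUnit("small simple-model run", True, 5_000),
    WorkUnit("complex-model protein", False, 30_000),
    WorkUnit("huge simple-model run", True, 250_000),
]
for wu in work:
    print(f"{wu.name:24s} -> {assign(wu)}")
```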

I think that the BOINC team (David Anderson at Berkeley) realizes that the PS3, Xbox 360, and GPUs offer a whole new level of crunching power per dollar of equipment, and that they will get BOINC code or whatnot written to utilize it. It's also possible that the Gates Foundation or Paul Allen might support such a programming project if BOINC cannot foot the bill or programmers are unwilling to donate their efforts...?



