Crunching with GPU?

Message boards : Number crunching : Crunching with GPU?



HTH

Joined: 6 Mar 06
Posts: 15
Credit: 250,712
RAC: 0
Message 15936 - Posted: 11 May 2006, 16:43:56 UTC

Hi!

I was wondering whether it is possible to use a GPU (3D graphics card) to crunch Rosetta@home work units. GPUs have a great deal of computing power, after all...
dcdc

Joined: 3 Nov 05
Posts: 1829
Credit: 115,264,954
RAC: 47,087
Message 15939 - Posted: 11 May 2006, 17:08:35 UTC - in response to Message 15936.  

Hi!

I was wondering whether it is possible to use a GPU (3D graphics card) to crunch Rosetta@home work units. GPUs have a great deal of computing power, after all...


This topic has been brought up a number of times - here's a link to previous posts.


HTH
Danny
Bob Guy

Joined: 7 Oct 05
Posts: 39
Credit: 24,895
RAC: 0
Message 16139 - Posted: 13 May 2006, 4:50:35 UTC - in response to Message 15936.  

Hi!

I was wondering whether it is possible to use a GPU (3D graphics card) to crunch Rosetta@home work units. GPUs have a great deal of computing power, after all...

Just to complete the thought: the problem is that GPUs only have single-precision floating-point capability, and BOINC projects usually need at least double precision. So GPU crunching is a no-go for now.
Jimi@0wned.org.uk

Joined: 10 Mar 06
Posts: 29
Credit: 335,252
RAC: 0
Message 16187 - Posted: 13 May 2006, 18:37:00 UTC

Do the Ageia PhysX PPUs have this limitation?
Ethan
Volunteer moderator

Joined: 22 Aug 05
Posts: 286
Credit: 9,304,700
RAC: 0
Message 16195 - Posted: 13 May 2006, 20:47:34 UTC - in response to Message 16139.  
Last modified: 13 May 2006, 20:48:07 UTC

Hi!

I was wondering whether it is possible to use a GPU (3D graphics card) to crunch Rosetta@home work units. GPUs have a great deal of computing power, after all...

Just to complete the thought: the problem is that GPUs only have single-precision floating-point capability, and BOINC projects usually need at least double precision. So GPU crunching is a no-go for now.


Actually, the last time someone from the project replied to this question, the answer was that Rosetta uses mostly single-precision floats (note: I'm not on the project, just repeating what's been said).

Bob Guy

Joined: 7 Oct 05
Posts: 39
Credit: 24,895
RAC: 0
Message 16198 - Posted: 13 May 2006, 21:05:34 UTC

The physics boards now available do not have sufficient precision, as far as I have been able to find out. That may change, but the current boards are designed for a specific type of physics solution, not for general mathematics processing. The boards are not an FPU replacement, no matter what 'advertising' you may have seen.

Regarding 'mostly single precision': mostly is not close enough. Even one set of double-precision calculations can completely remove any advantage of using a GPU. In addition, the only real advantage of a GPU is its unique ability to do signal processing, which is NOT what Rosetta does.

The signal processing ability I refer to is that done by the Fourier transform algorithm, for example, and other classes of matrix transformation. A GPU is not suitable for sufficiently precise processing of this kind. The 'sloppy' processing of visual data is acceptable because a person will be unable to detect the few 'errors' just by looking at the visual results of a GPU's processing, but when used in a precise mathematical algorithm the errors are large enough to make the results unusable.
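
To see why "mostly single precision is not close enough", here is a minimal sketch (plain C++, also compiles as CUDA host code; not Rosetta code, and the 1e-8 increment is an arbitrary illustrative value) of how single-precision rounding error piles up over a long computation:

// precision_demo.cpp - how single-precision error accumulates in a long sum.
// Build: g++ precision_demo.cpp -o precision_demo   (or nvcc, same source)
#include <cstdio>

int main() {
    const long N = 100000000L;       // 1e8 additions; the exact answer is 1.0
    float  sumF = 0.0f;
    double sumD = 0.0;
    for (long i = 0; i < N; ++i) {
        sumF += 1.0e-8f;   // once sumF is big enough, adding 1e-8 changes nothing
        sumD += 1.0e-8;    // double keeps ~15-16 significant digits throughout
    }
    std::printf("float : %.7f\n",  sumF);  // typically ~0.25 on IEEE-754 hardware
    std::printf("double: %.15f\n", sumD);  // ~1.0, off by only a tiny amount
    return 0;
}

The float total stalls far from 1.0 because each new term eventually falls below half a unit in the last place of the running sum; in an iterative scientific code, errors of that size make the results unusable, exactly as described above.
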
suguruhirahara

Joined: 7 Mar 06
Posts: 27
Credit: 116,020
RAC: 0
Message 16226 - Posted: 14 May 2006, 5:39:40 UTC
Last modified: 14 May 2006, 5:40:30 UTC

http://setiathome.berkeley.edu/forum_thread.php?id=29562
Physics processing performance.

This SETI forum thread may be useful; it discusses the same question.
Jimi@0wned.org.uk

Joined: 10 Mar 06
Posts: 29
Credit: 335,252
RAC: 0
Message 16250 - Posted: 14 May 2006, 14:28:33 UTC

Well, I've seen it mooted that it is a multi-core processor with a 4x4 matrix and inter-core communication speeds of 2 Tb/s. Not that this helps <sigh>. Ageia don't say anything about the architecture.
BennyRop

Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 16283 - Posted: 15 May 2006, 1:22:41 UTC

We've had a link or two to the Folding@home GPU client list, where they list a few of the physics processors that are out there. After two years(?) they still haven't released a GPU client to the general public, and the collaboration with the add-in physics processors hasn't resulted in a publicly released client either.
For some reason, creating a client for non-CPU hardware doesn't seem to be as easy as it should be...
Dirk Broer

Joined: 16 Nov 05
Posts: 22
Credit: 2,936,914
RAC: 5,351
Message 67791 - Posted: 23 Sep 2010, 7:53:01 UTC - in response to Message 16139.  

Hi!

I was wondering whether it is possible to use a GPU (3D graphics card) to crunch Rosetta@home work units. GPUs have a great deal of computing power, after all...

Just to complete the thought: the problem is that GPUs only have single-precision floating-point capability, and BOINC projects usually need at least double precision. So GPU crunching is a no-go for now.


Well, at some point in time you were right, but any nVidia card with compute capability 1.3 or higher has double-precision floating-point capability, meaning any card from the GTX 260 up. It is also present on the ATI Radeon HD 5970, the HD 5800 series (5830, 5850 and 5870), the HD 4800 series, the Mobile Radeon HD 4800 series, the HD 3800 series, the FirePro V8800, V8700 and V7800 series, and the AMD FireStream 9200 series GPUs. So the double-precision floating-point argument falls flat.
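
For anyone who wants to check their own card, a minimal sketch against the CUDA runtime API, using the compute-capability 1.3 threshold cited in the post above:

// dp_check.cu - report whether each CUDA device supports native doubles
// (compute capability 1.3 or higher). Build: nvcc dp_check.cu -o dp_check
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        bool hasDouble = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        std::printf("Device %d: %s (CC %d.%d) double precision: %s\n",
                    d, prop.name, prop.major, prop.minor,
                    hasDouble ? "yes" : "no");
    }
    return 0;
}
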
Chilean

Joined: 16 Oct 05
Posts: 711
Credit: 26,694,507
RAC: 0
Message 67796 - Posted: 24 Sep 2010, 1:08:58 UTC
Last modified: 24 Sep 2010, 1:20:37 UTC

Imagine a huge, huge, huge classroom full of 5th graders.
Now imagine another classroom with two math engineers.

The huge classroom is the GPU, and the two math engineers are the CPU (a dual-core one).

Which one do you think is up to the task of solving a college-level Calculus III problem?

The CPU is. =]

Sure, if you have a task in which you have to solve 3 trillion math problems (relatively easy ones) and manage to distribute each problem to each student, they'll finish way faster than the two engineers.

Best analogy I've come up with so far as to why GPUs can't crunch Rosetta yet.
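
The analogy translates into code roughly as follows; a hypothetical CUDA sketch, with neither function being Rosetta's actual workload:

// analogy.cu - the "classroom" analogy in (hypothetical) code.
// Build: nvcc analogy.cu -o analogy
#include <cstdio>
#include <cuda_runtime.h>

// GPU-friendly: millions of independent, simple problems. Each thread
// (each "5th grader") handles one element; nobody needs anyone else's answer.
__global__ void easyProblems(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f + 1.0f;   // trivial and independent
}

// CPU-friendly: one long chain where every step needs the previous result
// (the "Calculus III problem"). Extra threads cannot help here.
float hardProblem(const float* in, int n) {
    float state = 0.0f;
    for (int i = 0; i < n; ++i)
        state = state * 0.999f + in[i];        // loop-carried dependence
    return state;
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    easyProblems<<<n / 256, 256>>>(in, out, n);   // the classroom
    cudaDeviceSynchronize();

    float serial = hardProblem(in, n);            // the two engineers
    std::printf("parallel out[0]=%f, serial result=%f\n", out[0], serial);
    cudaFree(in); cudaFree(out);
    return 0;
}

Launching easyProblems over a million elements keeps every "student" busy at once, while hardProblem can never go faster than its single dependency chain, no matter how many threads are available.
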
dcdc

Joined: 3 Nov 05
Posts: 1829
Credit: 115,264,954
RAC: 47,087
Message 67801 - Posted: 24 Sep 2010, 7:49:07 UTC - in response to Message 67796.  

Imagine a huge, huge, huge classroom full of 5th graders.
Now imagine another classroom with two math engineers.

The huge classroom is the GPU, and the two math engineers are the CPU (a dual-core one).

Which one do you think is up to the task of solving a college-level Calculus III problem?

The CPU is. =]

Sure, if you have a task in which you have to solve 3 trillion math problems (relatively easy ones) and manage to distribute each problem to each student, they'll finish way faster than the two engineers.

Best analogy I've come up with so far as to why GPUs can't crunch Rosetta yet.


and it's a good one! ;)
JLConawayII

Joined: 21 Sep 10
Posts: 2
Credit: 1,009,812
RAC: 0
Message 67946 - Posted: 3 Oct 2010, 23:19:26 UTC - in response to Message 67796.  

Imagine a huge, huge, huge classroom full of 5th graders.
Now imagine another classroom with two math engineers.

The huge classroom is the GPU, and the two math engineers are the CPU (a dual-core one).

Which one do you think is up to the task of solving a college-level Calculus III problem?

The CPU is. =]

Sure, if you have a task in which you have to solve 3 trillion math problems (relatively easy ones) and manage to distribute each problem to each student, they'll finish way faster than the two engineers.

Best analogy I've come up with so far as to why GPUs can't crunch Rosetta yet.


Then why do other molecular dynamics projects run on GPUs?
Chilean

Joined: 16 Oct 05
Posts: 711
Credit: 26,694,507
RAC: 0
Message 67950 - Posted: 4 Oct 2010, 2:52:03 UTC - in response to Message 67946.  
Last modified: 4 Oct 2010, 3:16:50 UTC

Imagine a huge, huge, huge classroom full of 5th graders.
Now imagine another classroom with two math engineers.

The huge classroom is the GPU, and the two math engineers are the CPU (a dual-core one).

Which one do you think is up to the task of solving a college-level Calculus III problem?

The CPU is. =]

Sure, if you have a task in which you have to solve 3 trillion math problems (relatively easy ones) and manage to distribute each problem to each student, they'll finish way faster than the two engineers.

Best analogy I've come up with so far as to why GPUs can't crunch Rosetta yet.


Then why do other molecular dynamics projects run on GPUs?


I think it's because even though those simulations take lots and lots of operations, the operations aren't complex. You know that "this" goes this way under X circumstances.

Rosetta is aimed at developing software that can predict a protein's structure from its amino acid sequence. If you were just using Rosetta to predict a certain protein, then you could use GPUs. But to "develop" the software, you need an off-road car rather than an F1 car.

Correct me if I'm wrong.
Rabinovitch

Joined: 28 Apr 07
Posts: 28
Credit: 5,439,728
RAC: 0
Message 67952 - Posted: 4 Oct 2010, 5:29:09 UTC - in response to Message 67946.  

Then why do other molecular dynamics projects run on GPUs?

Yep, GPUGRID.net being the main example.
The_Bad_Penguin

Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 67963 - Posted: 5 Oct 2010, 5:37:36 UTC

What Chilean said...
borg

Joined: 4 Dec 07
Posts: 3
Credit: 142,556
RAC: 0
Message 67969 - Posted: 5 Oct 2010, 11:21:53 UTC

Chilean, I believe you are wrong.

The only reason GPUs can't be used with Rosetta is that nobody has yet taken the effort to write the code. GPUs have a sufficient instruction set, faster RAM, and greater computing power than CPUs. Complaints about their single-precision architecture are misplaced, as doubles can be processed on any CUDA-capable GPU; it just takes more cycles. The 'classroom' and 'off-road' analogies are too loose: a CPU is not smart, it only does what you tell it to. There is no fixed connection between the complexity of Rosetta and the x86/x64 architecture; any task can be programmed to run on any kind of processor given enough memory and time.
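
A minimal sketch of how one might measure that extra cycle cost (hypothetical kernel and sizes; needs a card with compute capability 1.3 or higher so the double variant runs natively, and the float/double ratio varies by GPU generation):

// dp_cost.cu - time the same dependent multiply-adds in float vs double.
// Build: nvcc -arch=sm_13 dp_cost.cu -o dp_cost   (sm_13 = CC 1.3)
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void fma_loop(T* data, int iters) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    T x = data[i];
    for (int k = 0; k < iters; ++k)
        x = x * (T)1.0000001 + (T)0.0000001;   // dependent multiply-adds
    data[i] = x;
}

template <typename T>
float timeKernel(int n, int iters) {
    T* d = nullptr;
    cudaMalloc(&d, n * sizeof(T));
    cudaMemset(d, 0, n * sizeof(T));
    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    cudaEventRecord(start);
    fma_loop<T><<<n / 256, 256>>>(d, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaFree(d);
    return ms;
}

int main() {
    int n = 1 << 20, iters = 1000;
    std::printf("float : %.2f ms\n", timeKernel<float>(n, iters));
    std::printf("double: %.2f ms\n", timeKernel<double>(n, iters));
    return 0;
}
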
mikey

Joined: 5 Jan 06
Posts: 1894
Credit: 8,739,190
RAC: 12,059
Message 67971 - Posted: 5 Oct 2010, 12:04:29 UTC - in response to Message 67969.  

Chilean, I believe you are wrong.

The only reason GPUs can't be used with Rosetta is that nobody has yet taken the effort to write the code. GPUs have a sufficient instruction set, faster RAM, and greater computing power than CPUs. Complaints about their single-precision architecture are misplaced, as doubles can be processed on any CUDA-capable GPU; it just takes more cycles. The 'classroom' and 'off-road' analogies are too loose: a CPU is not smart, it only does what you tell it to. There is no fixed connection between the complexity of Rosetta and the x86/x64 architecture; any task can be programmed to run on any kind of processor given enough memory and time.


You are correct, but the memory structure and the way a GPU works are totally different from a CPU. A GPU can do one thing a lot of times very well, but it is limited by how much different info you can put into its limited memory; beyond that, a lot of efficiency is lost because you are moving things in and out of memory, causing a tremendous slowdown and losing all of the GPU's great advantages.

Collatz's admin, Slicker, has explained this several times, and why his project works so well on a GPU as opposed to a CPU: they have tweaked the project to load as much as possible into the GPU's memory, but not so much that they start swapping it out. Rosetta's problem is more memory-intensive and not conducive to fitting within a GPU's memory limits. It is also constantly changing, which is another problem for GPUs: GPUs like the same set of parameters to be used over and over and over, while CPUs are very flexible.

Could it be done? Maybe. But will it be done anytime soon? Probably not. What they have works for them and their customers, so putting a ton of money into doing things another way is currently not cost-effective, and since it is a working lab that needs to make money, that is a consideration. Now, if they were to get a donation of a few million bucks targeted at making it work on a GPU, they could put forth some resources to take a closer look. Until then, those of us with GPUs must just crunch elsewhere.
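
The fit-or-swap point above can be sketched with the CUDA runtime; the 500 MB working set below is a hypothetical Rosetta-like figure, not a measured one:

// fit_check.cu - before offloading, check whether the whole working set fits
// in GPU memory; if it doesn't, host<->device swapping usually erases the
// GPU's advantage. Build: nvcc fit_check.cu -o fit_check
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeB = 0, totalB = 0;
    cudaMemGetInfo(&freeB, &totalB);

    size_t workingSet = 500u * 1024u * 1024u;   // assumed ~500 MB of program+data
    std::printf("GPU memory: %zu MB free of %zu MB\n",
                freeB >> 20, totalB >> 20);
    if (workingSet <= freeB) {
        std::printf("Working set fits: one upload, then crunch on-device.\n");
    } else {
        std::printf("Working set does NOT fit: constant PCIe swapping, "
                    "GPU advantage largely lost.\n");
    }
    return 0;
}
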
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 67973 - Posted: 5 Oct 2010, 15:40:54 UTC

Speaking as an informed volunteer here (recall that I am not part of the Rosetta project team), I've taken several CUDA webinars. The main challenge with getting a GPU application written, and performing better than a CPU application, is memory management. You have to make the instructions you wish to perform available to the GPU, which has to get them via the CPU and its disk and memory accesses. So if you have a tiny program that processes each element in an array, it runs like greased lightning: massively parallel. But if you need a suite of routines available to process different elements differently, then the GPU ends up spending most of its time waiting for the required elements of the program to be brought over from the CPU side. When they arrive, an existing program element must be evicted due to insufficient GPU memory, and so the cycle of waiting before you can get more work done continues.

The Rosetta executable is many MB just for the download. The runtime is hundreds of MB of program and data being processed. This is most likely the crux of why the study of the applicability of GPUs to Rosetta did not result in a GPU application.

In short, you can't believe all of the hype about new products. The comments are specifically chosen to make the new product sound dramatically new, different, easy and beneficial, but it doesn't mean your clothes will get any whiter with the new detergent.
Rosetta Moderator: Mod.Sense
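
A sketch of the two behaviors described above (hypothetical CUDA kernels, not Rosetta code): a single tiny routine applied uniformly runs massively parallel, while per-element routine selection forces warps to serialize and a large routine suite to churn through scarce device memory:

// divergence.cu - uniform work vs a per-element "suite of routines".
// Build: nvcc divergence.cu -o divergence
#include <cstdio>
#include <cuda_runtime.h>

// One tiny routine applied uniformly: every thread in a warp does the same
// thing, so the GPU runs it "like greased lightning".
__global__ void uniform(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = x[i] * x[i];
}

// A "suite of routines": neighboring elements pick different code paths, so
// each 32-thread warp serializes through every branch it touches, and a
// bigger suite also competes for the limited on-device code/data space.
__global__ void perElementRoutine(const int* kind, float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    switch (kind[i]) {
        case 0:  x[i] = sinf(x[i]);                 break;
        case 1:  x[i] = expf(x[i]);                 break;
        case 2:  x[i] = logf(fabsf(x[i]) + 1.0f);   break;
        default: x[i] = 0.0f;                       break;
    }
}

int main() {
    const int n = 1 << 20;
    float* x;  int* kind;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&kind, n * sizeof(int));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; kind[i] = i % 4; }

    uniform<<<n / 256, 256>>>(x, n);                 // fast, coherent
    perElementRoutine<<<n / 256, 256>>>(kind, x, n); // divergent
    cudaDeviceSynchronize();
    std::printf("x[0]=%f x[1]=%f x[2]=%f x[3]=%f\n", x[0], x[1], x[2], x[3]);
    cudaFree(x); cudaFree(kind);
    return 0;
}
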
borg

Joined: 4 Dec 07
Posts: 3
Credit: 142,556
RAC: 0
Message 67980 - Posted: 6 Oct 2010, 17:18:59 UTC

OK, that makes more sense.