GPU not getting work

snipaho

Joined: 10 Mar 06
Posts: 2
Credit: 2,560,436
RAC: 0
Message 62822 - Posted: 7 Aug 2009, 18:12:30 UTC

I have an nVidia 8800GT, which is CUDA-enabled. I am using BOINC 6.6.36 with the latest nVidia driver (190.38) on Windows 7 RC. In my preferences I have told it to always use the GPU (not just when the computer is idle).

Under Tasks, I see only 4 units being worked on. I have an Intel Core 2 Quad Q6700. Shouldn't I see 5 units going? How do I make BOINC get work for my GPU? Or does Rosetta not support GPU work units?
snipaho

Joined: 10 Mar 06
Posts: 2
Credit: 2,560,436
RAC: 0
Message 62823 - Posted: 7 Aug 2009, 18:22:27 UTC

Never mind, I think I found the answer (that Rosetta doesn't do GPU work units).
Greg_BE

Joined: 30 May 06
Posts: 5664
Credit: 5,711,666
RAC: 1,996
Message 62825 - Posted: 7 Aug 2009, 19:41:22 UTC - in response to Message 62823.  

Never mind, I think I found the answer (that Rosetta doesn't do GPU work units).

That is the correct answer; Rosetta is a CPU-only project.
I'm sure you have found some other projects that do use GPUs.
Marko

Joined: 6 Aug 09
Posts: 1
Credit: 8,702
RAC: 0
Message 62881 - Posted: 11 Aug 2009, 16:45:20 UTC

Will Rosetta@home ever support GPU computing?
Greg_BE

Joined: 30 May 06
Posts: 5664
Credit: 5,711,666
RAC: 1,996
Message 62892 - Posted: 11 Aug 2009, 22:54:39 UTC - in response to Message 62881.  

Will Rosetta@home ever support GPU computing?


This thread had a brief discussion about GPUs and R@H:

https://boinc.bakerlab.org/rosetta/forum_thread.php?id=4266&nowrap=true#54700

Greg_BE

Joined: 30 May 06
Posts: 5664
Credit: 5,711,666
RAC: 1,996
Message 62904 - Posted: 12 Aug 2009, 15:10:14 UTC - in response to Message 62903.  

Maybe it is a good idea to make a sticky thread on this subject. It might prevent the ever-returning question.

Get an official answer from DEK or someone on the technology side about WHY GPUs are not supported by R@H, and also whether there are any future plans to support them.

Then put that answer here and in the FAQ.
Problem solved.
adrianxw

Joined: 18 Sep 05
Posts: 653
Credit: 11,663,494
RAC: 723
Message 62916 - Posted: 13 Aug 2009, 7:43:20 UTC

>>> Problem solved.

Dream on Greg!
Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
Greg_BE

Joined: 30 May 06
Posts: 5664
Credit: 5,711,666
RAC: 1,996
Message 62928 - Posted: 14 Aug 2009, 10:30:02 UTC - in response to Message 62916.  

>>> Problem solved.

Dream on Greg!

In an ideal world, which this forum is not.
David E K
Volunteer moderator
Project administrator
Project developer
Project scientist

Joined: 1 Jul 05
Posts: 1018
Credit: 4,334,829
RAC: 0
Message 62931 - Posted: 14 Aug 2009, 16:59:13 UTC

I spent some time looking into using CUDA for Nvidia GPUs, but it turns out that a pretty extensive rewrite would be required to really make good use of the GPUs, and CUDA does not support C++ yet. I even tried to port a couple of routines, but the memory bandwidth demand was just too much.
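As an illustration of what such a port runs into, here is a minimal, entirely hypothetical sketch in CUDA C (not a routine from Rosetta's code base; all names are made up). Each thread does only about ten floating-point operations per 12-byte coordinate it fetches, and the neighbour indices scatter the reads across global memory, so the kernel's speed is set by memory bandwidth rather than by the GPU's arithmetic throughput:

    #include <cuda_runtime.h>

    // Hypothetical pair-score kernel: one thread per atom. Each thread
    // walks its atom's neighbour list, fetching neighbour coordinates
    // from global memory and accumulating a toy pair term. The arithmetic
    // per fetched float3 is tiny, so the kernel is memory-bandwidth-bound.
    __global__ void pair_score(const float3 *xyz, const int *nbr,
                               const int *nbr_count, int max_nbr,
                               float *score, int n_atoms)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n_atoms) return;

        float3 a = xyz[i];
        float s = 0.0f;
        int count = nbr_count[i];
        for (int k = 0; k < count; ++k) {
            // Scattered (uncoalesced) read: neighbours are not contiguous.
            float3 b = xyz[nbr[i * max_nbr + k]];
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            s += 1.0f / (dx * dx + dy * dy + dz * dz + 1.0f); // toy pair term
        }
        score[i] = s;
    }

Roughly ten flops per 12 bytes read is far below what a GPU needs to keep its arithmetic units busy, which matches the experience described above.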
Greg_BE

Joined: 30 May 06
Posts: 5664
Credit: 5,711,666
RAC: 1,996
Message 62934 - Posted: 14 Aug 2009, 20:44:06 UTC

Thanks for that info.
Now we know the real reason for R@H not going to GPUs.
Maybe you could write up something official along these lines, and then we can point others to that message?
Orgil

Joined: 11 Dec 05
Posts: 82
Credit: 169,751
RAC: 0
Message 63444 - Posted: 24 Sep 2009, 16:22:12 UTC
Last modified: 24 Sep 2009, 16:27:26 UTC

Recently AMD released the HD 5850 GPU, which is far better than Nvidia at GPU computing, and it uses something called OpenCL plus another language for application development. Could R@H use the AMD options? Something cheaper than Nvidia and non-CUDA would still enable a lot of computing resources.

The thing is, GPU computing is an inevitable advancement for DC projects, yet only 2 out of all the BOINC projects have adopted it, which means adoption is really slow.
robertmiles

Joined: 16 Jun 08
Posts: 1225
Credit: 13,862,138
RAC: 2,332
Message 63448 - Posted: 25 Sep 2009, 4:43:04 UTC
Last modified: 25 Sep 2009, 5:13:04 UTC

As best I can tell, Rosetta@home uses an algorithm in minirosetta which requires so much memory per processor that it won't get much if any benefit from graphics cards, which offer a greater number of processors but a smaller amount of memory per processor - whether the Nvidia type, the ATI type, or the AMD type Orgil recommends. Some other BOINC projects that require much less memory per processor probably could benefit, if the proper software were available to compile whatever computer language they are written in into whatever code at least one of those graphics cards requires - and they don't yet have many choices of which types of code will work.

The OpenCL idea of one computer code shared by all new graphics cards is rather new, and the software needed to properly support it is probably slow in coming.

I've found GPUGRID to be a good source of workunits that run well on many of the more recent Nvidia cards, though not on those with the fewest processors or with too old a variety of Nvidia processor. Their algorithm appears to handle only the protein-folding part of the things Rosetta@home can do, and works with less memory per processor.

I'd like to see a new Rosetta@home program that handles only some of the functions the other Rosetta@home programs do, but uses at least one type of graphics card instead of running on the CPU only. However, the compiler support for writing such a program currently makes the work slow enough that few BOINC projects have gone far beyond their initial efforts to try it.
mikey

Joined: 5 Jan 06
Posts: 1894
Credit: 8,767,498
RAC: 6,467
Message 63457 - Posted: 26 Sep 2009, 11:36:19 UTC - in response to Message 63444.  

Recently AMD released the HD 5850 GPU, which is far better than Nvidia at GPU computing, and it uses something called OpenCL plus another language for application development.

The thing is, GPU computing is an inevitable advancement for DC projects, yet only 2 out of all the BOINC projects have adopted it, which means adoption is really slow.


There are actually more than just 2, but I do agree that a few more could join in if they had the incentive or money.
Here are a few BOINC projects that have GPU processing right now:
GPUGRID, SETI, Milkyway, Collatz, and AQUA; and then there is always Folding@home as a non-BOINC project. There are more projects coming that promise GPU crunching, but time will tell whether they can follow through or not.
Gen_X_Accord

Joined: 5 Jun 06
Posts: 154
Credit: 279,018
RAC: 0
Message 63470 - Posted: 27 Sep 2009, 5:11:59 UTC

The CUDA website does say that Fortran and C++ will be supported in the future. However, if Rosetta needs "X" amount of memory per processor, and these new cards have something like 240 GPU cores but only 512 MB to 2 GB of total memory, I'm not sure how that translates to memory per core, but I bet it is not much.
http://www.nvidia.com/object/cuda_what_is.html
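To put rough numbers on that (illustrative assumptions, not measured figures): if a 240-core card with 1 GB of memory divided its memory evenly, each core would get about 1024 MB / 240 ≈ 4.3 MB, while a single minirosetta task can want on the order of a few hundred MB of RAM, so the per-core share falls short by roughly two orders of magnitude.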

And I know that the ATI people love to ask "What about us?", and their cards do more GPU computing overall than Nvidia's, but from the few things I have read, the FireStream ATI drivers still suck and still crash many a computer. I'd rather have stability than more GFLOPS of work done.
mwgiii

Joined: 29 Sep 05
Posts: 3
Credit: 90,006
RAC: 0
Message 63680 - Posted: 14 Oct 2009, 3:04:49 UTC

While you are waiting for an R@H CUDA app, you can always take a look at GPUGrid (http://www.gpugrid.net/). They do all-atom biomolecular simulations using GPUs only. That way you can keep R@H going full blast while letting your GPU work as well.

Concerning ATI, there are a couple of BOINC projects which use ATI cards for GPU processing, but most projects seem to be waiting on OpenCL to add ATI support.
mikey

Joined: 5 Jan 06
Posts: 1894
Credit: 8,767,498
RAC: 6,467
Message 63684 - Posted: 14 Oct 2009, 9:17:30 UTC - in response to Message 63680.  

While you are waiting for an R@H CUDA app, you can always take a look at GPUGrid (http://www.gpugrid.net/). They do all-atom biomolecular simulations using GPUs only. That way you can keep R@H going full blast while letting your GPU work as well.

Concerning ATI, there are a couple of BOINC projects which use ATI cards for GPU processing, but most projects seem to be waiting on OpenCL to add ATI support.


Collatz is one of those able to use ATI cards and it works quite well for most.
http://boinc.thesonntags.com/collatz/ They can also use a ton of cards not just the newest, some people have even been able to use the built-into the motherboard cards both Cuda and ATI versions. Not everyone has been able to do that but some have. One thing they have done recently is to make it so only 128 meg and above video cards will work, the 64 meg cards just didn't cut it for the Science anymore so they had to drop support for them. If your card works you can set Boinc to get only gpu units from them and cpu units from here and then you will be crunching with both your gpu and cpu at the same time. You MUST use a very high level version of Boinc though, they are using the test version 6.10.13 over there now.
