new version of BOINC + CUDA support

Message boards : Number crunching : new version of BOINC + CUDA support



Orgil
Joined: 11 Dec 05
Posts: 82
Credit: 169,751
RAC: 0
Message 59393 - Posted: 6 Feb 2009, 15:43:12 UTC
Last modified: 6 Feb 2009, 15:44:47 UTC

Looks like someone out there is making more forward steps in the GPU computation field while in this corner we are sleeping:

http://www.tgdaily.com/content/view/41352/140/
Paul D. Buck
Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 59397 - Posted: 6 Feb 2009, 19:15:50 UTC

There are no applications yet for the ATI series of cards.

All current projects are right now targeting the Nvidia series. Work on an ATI application is being done by a volunteer at Milky Way, but it is in alpha state and only runs on Win64 ...

We are in early days guys and girls ... it will be a little while before we really get going ...
dcdc
Joined: 3 Nov 05
Posts: 1832
Credit: 119,860,059
RAC: 4,566
Message 59399 - Posted: 6 Feb 2009, 19:37:33 UTC

As Paul says, it's early days. CUDA might not last very long - OpenCL, and possibly whatever Larrabee runs, could potentially be much better investments than CUDA...

The Inquirer suggests that the next-gen consoles will be powered by Intel Larrabee (Sony) and ATI (Xbox), and probably ATI for the Wii too, so maybe Nvidia isn't the way to go...
Paul D. Buck
Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 59400 - Posted: 6 Feb 2009, 19:54:40 UTC - in response to Message 59399.  

As Paul says, it's early days. CUDA might not last very long - OpenCL, and possibly whatever Larrabee runs, could potentially be much better investments than CUDA...

The Inquirer suggests that the next-gen consoles will be powered by Intel Larrabee (Sony) and ATI (Xbox), and probably ATI for the Wii too, so maybe Nvidia isn't the way to go...


True,

But at the moment it is really the only game in town for BOINC. Folding has apps that will run on ATI, and there are quite a few people who run both at the same time ...

Open Something-or-other is supposed to replace CUDA and the older ATI API so that there is a common basis; this should increase penetration and decrease the effort required.

I have pretty well filled out my dance card with Nvidia at the moment and likely will not invest much more till I see which way the wind blows ... though if I do build another system this summer I will likely get another pair of GTX 295s if the Milky Way application is not yet running ...

The use of the Cell processor is still not well supported as yet ... SaH is looking to do that too ... The only bad news there is that it still requires installing Linux to run, where we might be better off if we could make it more like a game cartridge ... broader audience ...
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 59401 - Posted: 6 Feb 2009, 21:13:14 UTC - in response to Message 59399.  
Last modified: 6 Feb 2009, 21:21:24 UTC

For those that want to read the article dcdc was referring to: Intel will design PlayStation 4 GPU


With a couple of deliverables satisfied, the PS4 GPU belongs to Intel. No word if this is going to be the entire architecture, CPU as well, or not. That, from what we are told, is not final yet.

Next next gen consoles also will likely have a CPU which we know nothing about. That said, given that Intel will basically be designing large swathes of the PS4, it would seem to be leaning toward x86. Given that, and MS's inclination toward x86 software, that would seem a natural path for them to follow as well, if for no other reason than to protect the living room from the ARM scourge running Linux.

With that, we have the first hard info on the next next generation gaming boxes. Given silicon time scales, three years out means work starts about now. ATI and Intel have work to do, but it should be very interesting when all things are done, a radical shift towards the PC of a magnitude not seen since the Playstation replaced cartridges with CDs.



As Paul says, it's early days. CUDA might not last very long - OpenCL, and possibly whatever Larrabee runs, could potentially be much better investments than CUDA...

The Inquirer suggests that the next-gen consoles will be powered by Intel Larrabee (Sony) and ATI (Xbox), and probably ATI for the Wii too, so maybe Nvidia isn't the way to go...
AMD_is_logical
Joined: 20 Dec 05
Posts: 299
Credit: 31,460,681
RAC: 0
Message 59403 - Posted: 6 Feb 2009, 22:02:34 UTC - in response to Message 59401.  

For those that want to read the article dcdc was referring to: Intel will design PlayStation 4 GPU


Sony denies this rumor: http://www.t3.com/news/sony-denies-ps4-intel-gpu-rumour?=38046&cid=OTC-RSS&attr=T3-Main-RSS
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 59404 - Posted: 6 Feb 2009, 22:48:40 UTC - in response to Message 59403.  
Last modified: 6 Feb 2009, 22:51:39 UTC

Interesting.

Given the number of times, imho, that the Inquirer has been correct (many), and the number of times that Sony "denials" have been correct (few), my money is on believing the Inquirer over Sony.

Ymmv.

What is the typical life-cycle for a generation of a gaming console? Wouldn't ~2012 be about the right time for the next generation? If so, wouldn't they have to begin planning now?
mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,217,610
RAC: 1,154
Message 59413 - Posted: 7 Feb 2009, 12:31:26 UTC - in response to Message 59393.  

Looks like someone out there is making more forward steps in the GPU computation field while in this corner we are sleeping:

http://www.tgdaily.com/content/view/41352/140/


The problem, right now, is that NVidia uses CUDA while two other places are developing a different "standard". Microsoft is one of those; Open MM is the other. I got my Maximum PC magazine yesterday, and in it they call for NVidia to dump CUDA and go with one of the other "standards". NVidia wrote CUDA and is the only one that can use it, making projects like Einstein either code to different standards or wait for NVidia to come around to one of the other standards. I am guessing they are going to wait and see; that way even more video cards would be supported instead of JUST NVidia ones.
Orgil
Joined: 11 Dec 05
Posts: 82
Credit: 169,751
RAC: 0
Message 59424 - Posted: 7 Feb 2009, 18:36:00 UTC - in response to Message 59413.  
Last modified: 7 Feb 2009, 18:36:23 UTC



The problem, right now, is that NVidia uses CUDA while two other places are developing a different "standard". Microsoft is one of those; Open MM is the other. I got my Maximum PC magazine yesterday, and in it they call for NVidia to dump CUDA and go with one of the other "standards". NVidia wrote CUDA and is the only one that can use it, making projects like Einstein either code to different standards or wait for NVidia to come around to one of the other standards. I am guessing they are going to wait and see; that way even more video cards would be supported instead of JUST NVidia ones.


But as I understand it, CUDA is the pioneering technology that gives an alternative to CPU tasks, plus with the Tesla it has now achieved over 1 TFLOP of computing power from a simple card-like gadget. So supposedly CUDA has more of a future than ATI and Intel. I guess most BOINC DC projects are now doing their math research for a possible shift to CUDA.
Paul D. Buck
Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 59426 - Posted: 7 Feb 2009, 19:43:57 UTC - in response to Message 59413.  

Looks like someone out there is making more forward steps in the GPU computation field while in this corner we are sleeping:

http://www.tgdaily.com/content/view/41352/140/


The problem, right now, is that NVidia uses CUDA while two other places are developing a different "standard". Microsoft is one of those; Open MM is the other. I got my Maximum PC magazine yesterday, and in it they call for NVidia to dump CUDA and go with one of the other "standards". NVidia wrote CUDA and is the only one that can use it, making projects like Einstein either code to different standards or wait for NVidia to come around to one of the other standards. I am guessing they are going to wait and see; that way even more video cards would be supported instead of JUST NVidia ones.



Yes, well, OpenCL is coming and all parties look to be supporting it, as this quote (from that article) says:

AMD has decided to support OpenCL (and DirectX 11) instead of the now deprecated Close to Metal in its Stream framework.[5][6] RapidMind announced their adoption of OpenCL underneath their development platform, in order to support GPUs from multiple vendors with one interface.[7] Nvidia announced on December 9, 2008 to add full support for the OpenCL 1.0 specification to its GPU Computing Toolkit.[8]


At least for the next couple of years I would expect that the way to get the best performance will still be to use the vendor-specific extensions, but with time, maybe this time, we shall see something different...

At any rate, for BOINC, at this moment, Nvidia is the only game in town ... six months from now, who knows? With this generation of GPUs, Nvidia has the edge in single precision while ATI has the edge in double precision ... at least that is what I have been told ... :)
mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,217,610
RAC: 1,154
Message 59451 - Posted: 8 Feb 2009, 12:27:33 UTC - in response to Message 59424.  



The problem, right now, is that NVidia uses CUDA while two other places are developing a different "standard". Microsoft is one of those; Open MM is the other. I got my Maximum PC magazine yesterday, and in it they call for NVidia to dump CUDA and go with one of the other "standards". NVidia wrote CUDA and is the only one that can use it, making projects like Einstein either code to different standards or wait for NVidia to come around to one of the other standards. I am guessing they are going to wait and see; that way even more video cards would be supported instead of JUST NVidia ones.


But as I understand it, CUDA is the pioneering technology that gives an alternative to CPU tasks, plus with the Tesla it has now achieved over 1 TFLOP of computing power from a simple card-like gadget. So supposedly CUDA has more of a future than ATI and Intel. I guess most BOINC DC projects are now doing their math research for a possible shift to CUDA.


You can crunch with ATI cards too, but only at Folding@home, and ONLY with their non-BOINC application.
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 59475 - Posted: 9 Feb 2009, 3:20:43 UTC - in response to Message 59403.  
Last modified: 9 Feb 2009, 3:30:42 UTC

Re the rumors and denials (discussed below) of an Intel Larrabee being included in the eventual Sony PS4, there also seem to be rumors of it being potentially paired with a Cell2 cpu, whatever that is/will be (FUD alert = 2 x ppe & 32 spe's).

But if a potential Cell2 is anything like a PowerXCell 8i, and is mated with a Larrabee that turns out to deliver everything Intel has promised, it certainly will be interesting times indeed...


In 2008, IBM announced a revised variant of the Cell called the PowerXCell 8i, which is available in QS22 Blade Servers from IBM. The PowerXCell is manufactured on a 65 nm process, and adds support for up to 32GB of slotted DDR2 memory, as well as dramatically improving double-precision floating-point performance on the SPEs from a peak of about 14 GFLOPS to 102 GFLOPS total for 8 SPEs. The IBM Roadrunner supercomputer, currently the world's fastest, consists of exactly 12,240 PowerXCell 8i processors, along with 6,562 AMD Opteron processors.
Tino
Joined: 2 May 07
Posts: 1
Credit: 3,877,033
RAC: 0
Message 59731 - Posted: 22 Feb 2009, 17:06:00 UTC - in response to Message 59475.  

Re the rumors and denials (discussed below) of an Intel Larrabee being included in the eventual Sony PS4, there also seem to be rumors of it being potentially paired with a Cell2 cpu, whatever that is/will be (FUD alert = 2 x ppe & 32 spe's).

But if a potential Cell2 is anything like a PowerXCell 8i, and is mated with a Larrabee that turns out to deliver everything Intel has promised, it certainly will be interesting times indeed...


In 2008, IBM announced a revised variant of the Cell called the PowerXCell 8i, which is available in QS22 Blade Servers from IBM. The PowerXCell is manufactured on a 65 nm process, and adds support for up to 32GB of slotted DDR2 memory, as well as dramatically improving double-precision floating-point performance on the SPEs from a peak of about 14 GFLOPS to 102 GFLOPS total for 8 SPEs. The IBM Roadrunner supercomputer, currently the world's fastest, consists of exactly 12,240 PowerXCell 8i processors, along with 6,562 AMD Opteron processors.



Hi to all,
I've got an Nvidia CUDA video card installed.

How can I see when the GPU is working?
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 59732 - Posted: 22 Feb 2009, 17:17:39 UTC

Hey Tino,

The GPU does NOT work with Rosetta@Home.

There are some (very few) BOINC projects that will work with an nVidia CUDA GPU: gpugrid.net comes to mind.

Or else you can join Folding@Home...
Paul D. Buck
Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 59737 - Posted: 22 Feb 2009, 21:22:49 UTC - in response to Message 59731.  

Hi to all,
I've got an Nvidia CUDA video card installed.

How can I see when the GPU is working?


When you start BOINC Manager, in the Messages tab you should see a note that BOINC has detected a GPU ...

There are only 3 projects in BOINC that use GPUs at this time: SaH, SaH Beta, and GPU Grid ... For ATI there is an ALPHA test application for the 38xx and 48xx cards.

Not all video cards are acceptable for all projects that use them ... typically only the higher-end cards work ... see the project boards for details. As time goes on we should see more projects come on-line with GPU applications, and possibly some of the lower-end cards will be usable for other projects. As an example, cards that are not usable on GPU Grid can be used on SaH ...

Last notes: you have to have recent video drivers installed to use the GPU, and a recent version of BOINC ... for 32-bit Windows the "best" known driver right now is 181.22 and the best BOINC Manager version is 6.5.0 ...
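If you are curious what that detection involves under the hood, the CUDA runtime can be queried directly. This is only a generic stand-alone sketch of such a query, not BOINC's actual detection code:

    // Generic sketch: ask the CUDA runtime which GPUs are installed.
    // NOT BOINC's code, just the kind of query a client makes before
    // it can hand work to the card.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            std::printf("No usable CUDA GPU found\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("GPU %d: %s, %d multiprocessors, %zu MB, compute %d.%d\n",
                        i, prop.name, prop.multiProcessorCount,
                        prop.totalGlobalMem / (1024 * 1024), prop.major, prop.minor);
        }
        return 0;
    }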
todd
Joined: 12 Oct 06
Posts: 2
Credit: 1,215,594
RAC: 0
Message 63780 - Posted: 23 Oct 2009, 1:42:46 UTC - in response to Message 57950.  

I just noticed that there is a new version of BOINC (6.4.5) and it is now advertising that applications can support CUDA if they so desire. I for one would love to see Rosetta support CUDA.



I second that vote. CUDA has been out for quite a while and brings a tremendous amount of computing power to the table. I don't understand what the delay is; it's not rocket science. As a developer, it's not that much of a challenge to port between hardware and different operating systems. I've done it many times and lived to tell about it.

So tell us, Rosetta staff, what's the hold-up?

mxplm
Joined: 12 Sep 09
Posts: 4
Credit: 234,332
RAC: 0
Message 63788 - Posted: 23 Oct 2009, 11:58:06 UTC

As a developer, it's not that much of a challenge to port between hardware and different operating systems.


I think this is not just porting an existing application to a new architecture, like different types of CPUs. To use a video card effectively, one has to rewrite a great deal of the software and use a completely different style of writing source code.
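For example - just an illustrative sketch, not anything from Rosetta's actual source - here is the same trivial operation written the ordinary CPU way and then the CUDA way. The loop disappears, the work is spread over thousands of threads, and the data has to be copied to and from the card explicitly:

    #include <cuda_runtime.h>

    // CPU style: one thread walks the whole array.
    void scale_cpu(float* x, int n, float a) {
        for (int i = 0; i < n; ++i) x[i] *= a;
    }

    // GPU style: no loop; each of many threads handles one element.
    __global__ void scale_gpu(float* x, int n, float a) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    // Host wrapper: explicit copies to and from the card surround the launch.
    void scale_on_gpu(float* host_x, int n, float a) {
        float* dev_x = 0;
        cudaMalloc((void**)&dev_x, n * sizeof(float));
        cudaMemcpy(dev_x, host_x, n * sizeof(float), cudaMemcpyHostToDevice);
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale_gpu<<<blocks, threads>>>(dev_x, n, a);
        cudaMemcpy(host_x, dev_x, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev_x);
    }

Scaling an array is trivial; restructuring something like a Monte Carlo folding protocol into that style is what makes the rewrite so large.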
dcdc
Joined: 3 Nov 05
Posts: 1832
Credit: 119,860,059
RAC: 4,566
Message 63789 - Posted: 23 Oct 2009, 12:06:27 UTC - in response to Message 63780.  

I just noticed that there is a new version of BOINC (6.4.5) and it is now advertising that applications can support CUDA if they so desire. I for one would love to see Rosetta support CUDA.



I second that vote. CUDA has been out for quite a while and brings a tremendous amount of computing power to the table. I don't understand what the delay is; it's not rocket science. As a developer, it's not that much of a challenge to port between hardware and different operating systems. I've done it many times and lived to tell about it.

So tell us, Rosetta staff, what's the hold-up?


I don't believe CUDA supports C++ yet for a start.

See: https://boinc.bakerlab.org/rosetta/forum_thread.php?id=4100&nowrap=true#52901
and (especially David E K's response halfway down):
https://boinc.bakerlab.org/rosetta/forum_thread.php?id=5023&nowrap=true#62931

It's not a case of porting/recompiling - it's a case of rewriting, and even minirosetta is many hundreds of thousands of lines of code that would all need testing and validating. Assuming GPGPU is suitable and the latency between the CPU and GPU doesn't have a negative effect on the processing, I think there's a reasonable probability of it happening at some point, seeing as GPGPU is maturing reasonably quickly, but it's never going to be a quick process for R@H.

It's not rocket science but it's probably as complicated ;)
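To give a feel for the latency point - again just a generic CUDA sketch with made-up numbers, nothing to do with minirosetta - this is the pattern that hurts: sending lots of small pieces of work to the card, where the two copies across the PCIe bus can easily cost more than the arithmetic itself:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Trivial per-element work, so the transfers dominate.
    __global__ void tiny_kernel(float* x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 1.0001f;
    }

    int main() {
        const int n = 1024;                      // deliberately small batch
        float host[n];
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        float* dev = 0;
        cudaMalloc((void**)&dev, n * sizeof(float));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        for (int iter = 0; iter < 1000; ++iter) {
            // Host -> device, tiny kernel, device -> host: for a batch this
            // small the two copies usually cost far more than the math.
            cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
            tiny_kernel<<<(n + 255) / 256, 256>>>(dev, n);
            cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
        }
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        std::printf("1000 small round trips: %.1f ms\n", ms);

        cudaFree(dev);
        return 0;
    }

An application that keeps its data on the card and does a lot of work per transfer avoids this; one that bounces small structures back and forth does not.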
robertmiles
Joined: 16 Jun 08
Posts: 1234
Credit: 14,338,560
RAC: 1,227
Message 63814 - Posted: 25 Oct 2009, 6:07:16 UTC
Last modified: 25 Oct 2009, 6:43:37 UTC

SOME Nvidia cards are about to start supporting code written in C++, OpenCL, OpenGL, or a few other computer languages, but Nvidia hasn't made it clear yet whether that includes any cards made with the chips they have been selling in the past.


NVIDIA’s Next Generation CUDA(TM) Compute Architecture: Fermi

http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIAFermiArchitectureWhitepaper.pdf


New FERMI GPU, 4x more cores, more memory

https://boinc.bakerlab.org/rosetta/forum_thread.php?id=5094


Nvidia GT300

http://www.gpugrid.net/forum_thread.php?id=1406


The main problem with ATI cards at present is that ATI is far behind at providing the proper compilers to convert software written to run on CPUs so that it can run on an ATI GPU instead, without totally rewriting it in a different computer language.


Another problem with converting minirosetta to run on any type of GPU card is the amount of memory it requires - to get a full speedup it would require the GPU card to have approximately 512 MB times the number of GPU cores for its total graphics memory (a few hundred times 512 MB for high-end cards). It could limit the number of GPU cores it uses to the number it can find GPU memory for, though, and at least get it to run on a GPU card, sometimes with some speedup.
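As a rough, purely hypothetical illustration of that arithmetic: a high-end card with 240 GPU cores would need about 240 x 512 MB, roughly 120 GB of graphics memory, to give every core its own full working set, while such cards actually ship with on the order of 1 GB - so in practice it could only find memory for a handful of cores at a time.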


