A SINGLE Graphics Card = 1 teraFLOPS

Message boards : Number crunching : A SINGLE Graphics Card = 1 teraFLOPS



mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,214,047
RAC: 1,450
Message 59415 - Posted: 7 Feb 2009, 12:41:46 UTC - in response to Message 59396.  

I tried running GPU Grid and it stopped my BOINC crunching on one of the CPUs. I am using version 6.4.5, and I know there are newer versions that address some of these issues, but I really want to crunch for Rosetta with my GPU. Einstein would be okay too, but currently neither one supports video card crunching. I am back to what works for me, at least for now.


BOINC Manager 6.5.0 fixes that problem that 6.4.5 had ...


I guess I am in for another test day! Thanks
ID: 59415
Paul D. Buck
Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 59425 - Posted: 7 Feb 2009, 19:37:41 UTC - in response to Message 59415.  

I tried running GPU Grid and it stopped my BOINC crunching on one of the CPUs. I am using version 6.4.5, and I know there are newer versions that address some of these issues, but I really want to crunch for Rosetta with my GPU. Einstein would be okay too, but currently neither one supports video card crunching. I am back to what works for me, at least for now.


BOINC Manager 6.5.0 fixes that problem that 6.4.5 had ...


I guess I am in for another test day! Thanks


Me too ...

I ran out of disk space and the drive started to develop directory errors, which the tools won't fix because there is not enough space ... so I bought a set of six 1.5 TB drives ... two of them are failing SMART ... and with only four I cannot rebuild my RAID 5 array, as I used one for the OS ... so, do I re-install the OS for the 5th time (I installed it on one of the failing drives first, sigh) and go through all the work and time to recover all my settings (again) so I can use the four drives that seem to work?

Sigh ...

Anyway, BOINC Manager 6.5.0 is not perfect, but it is as close as the more modern versions come. The next best version is 6.2.19, then 6.4.5 ... none of the 6.6.x versions will work long term. They each have issues that, if they don't show up right away, will soon bite you ... You can look at the long list of threads over at GPU Grid for more.

I am using 6.5.0 on all my GPU-powered systems and have no specific complaints, though at times it looks like the work fetch is not keeping the queue properly filled. However, in my case I am making a concentrated effort on several projects in succession, so that is less of an issue as I slowly run down my list of projects to "force up" my totals ... and, as I get them to target levels, fall back to support levels of Resource Share ...

Anyway ... good luck with the tests ...
ID: 59425
mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,214,047
RAC: 1,450
Message 59452 - Posted: 8 Feb 2009, 12:36:37 UTC - in response to Message 59425.  

I tried running GPU Grid and it stopped my BOINC crunching on one of the CPUs. I am using version 6.4.5, and I know there are newer versions that address some of these issues, but I really want to crunch for Rosetta with my GPU. Einstein would be okay too, but currently neither one supports video card crunching. I am back to what works for me, at least for now.


BOINC Manager 6.5.0 fixes that problem that 6.4.5 had ...


I guess I am in for another test day! Thanks


Me too ...

I ran out of disk space and the drive started to develop directory errors, which the tools won't fix because there is not enough space ... so I bought a set of six 1.5 TB drives ... two of them are failing SMART ... and with only four I cannot rebuild my RAID 5 array, as I used one for the OS ... so, do I re-install the OS for the 5th time (I installed it on one of the failing drives first, sigh) and go through all the work and time to recover all my settings (again) so I can use the four drives that seem to work?

Sigh ...

Anyway, BOINC Manager 6.5.0 is not perfect, but it is as close as the more modern versions come. The next best version is 6.2.19, then 6.4.5 ... none of the 6.6.x versions will work long term. They each have issues that, if they don't show up right away, will soon bite you ... You can look at the long list of threads over at GPU Grid for more.

I am using 6.5.0 on all my GPU-powered systems and have no specific complaints, though at times it looks like the work fetch is not keeping the queue properly filled. However, in my case I am making a concentrated effort on several projects in succession, so that is less of an issue as I slowly run down my list of projects to "force up" my totals ... and, as I get them to target levels, fall back to support levels of Resource Share ...

Anyway ... good luck with the tests ...


Choices, choices, so many for you to choose from. I am assuming you have a disk image of your settings for your backup? Personally, I would not install anything I use regularly on a known bad drive. I do install Windows and BOINC on bad drives, BUT ONLY on BOINC-only machines. That way if they crash, again, I really don't care; I have only lost that one machine and the units on it. I put a mark on the drive when it crashes; after 2 marks the drive is wiped and put in a pile to have a hole drilled in it.

I went to download BOINC 6.5.0 and it is not on the list of available downloads. I looked at several BOINC projects and just don't see it. I click on "all versions" and it goes from 6.4.5 to 6.6.4. I have tried 6.4.5 and did not care for it, as discussed earlier; is 6.6.4 a viable option?
ID: 59452
Paul D. Buck
Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 59467 - Posted: 8 Feb 2009, 18:53:24 UTC - in response to Message 59452.  

Choices, choices, so many for you to choose from. I am assuming you have a disk image of your settings for your backup? Personally, I would not install anything I use regularly on a known bad drive. I do install Windows and BOINC on bad drives, BUT ONLY on BOINC-only machines. That way if they crash, again, I really don't care; I have only lost that one machine and the units on it. I put a mark on the drive when it crashes; after 2 marks the drive is wiped and put in a pile to have a hole drilled in it.


I do have a backup ... but am still setting things up. I bought six new drives, only to have two of them fail SMART tests right away. So, they will be going back tomorrow. So, I am cleaning off one disk to make it the startup disk, and that will let me take the one 1.5 TB drive that I put the OS on and put it into the RAID array. I did build an array with the other three 1.5 TB disks, so they should be in good shape.

Another 4-6 hours moving files off the 500 GB drive will clear it, and I can use a clone copy to move the files off the current startup disk to the old data disk ... then boot off the new startup disk and make sure that works ... then take the drive out, put it into the right slot, whack the RAID array and build it anew ... THEN I can start to restore the data from the backup drives to the RAID array ...

When all that is done and I am confident that the data set is good, I can take the four old 1 TB disks from the old RAID array and use them as disks elsewhere ... like to make a larger clone of the startup disk, and to have an "overflow drive" for stuff that is not that important to have on the RAID array ...

And when summer is here and I may build another system I will have a spare drive.

I went to download BOINC 6.5.0 and it is not on the list of available downloads. I looked at several BOINC projects and just don't see it. I click on "all versions" and it goes from 6.4.5 to 6.6.4. I have tried 6.4.5 and did not care for it, as discussed earlier; is 6.6.4 a viable option?


6.6.4 is a bad choice. 6.5.0 is the best alternative I have found. I have been using it for a month or so now and have no complaints, though I think the work fetch is still unbalanced and it does not yet properly handle CUDA vs. CPU ... but with futzed Resource Shares I keep CUDA work on hand, and that is good enough for now ...

There is a hidden page where you can get older versions... I always have to hunt around for it ... AH, try this
ID: 59467
mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,214,047
RAC: 1,450
Message 59468 - Posted: 8 Feb 2009, 20:00:30 UTC - in response to Message 59467.  

Choices, choices, so many for you to choose from. I am assuming you have a disk image of your settings for your backup? Personally, I would not install anything I use regularly on a known bad drive. I do install Windows and BOINC on bad drives, BUT ONLY on BOINC-only machines. That way if they crash, again, I really don't care; I have only lost that one machine and the units on it. I put a mark on the drive when it crashes; after 2 marks the drive is wiped and put in a pile to have a hole drilled in it.


I do have a backup ... but am still setting things up. I bought six new drives, only to have two of them fail SMART tests right away. So, they will be going back tomorrow. So, I am cleaning off one disk to make it the startup disk, and that will let me take the one 1.5 TB drive that I put the OS on and put it into the RAID array. I did build an array with the other three 1.5 TB disks, so they should be in good shape.

Another 4-6 hours moving files off the 500 GB drive will clear it, and I can use a clone copy to move the files off the current startup disk to the old data disk ... then boot off the new startup disk and make sure that works ... then take the drive out, put it into the right slot, whack the RAID array and build it anew ... THEN I can start to restore the data from the backup drives to the RAID array ...

When all that is done and I am confident that the data set is good, I can take the four old 1 TB disks from the old RAID array and use them as disks elsewhere ... like to make a larger clone of the startup disk, and to have an "overflow drive" for stuff that is not that important to have on the RAID array ...

And when summer is here and I may build another system I will have a spare drive.

I went to download BOINC 6.5.0 and it is not on the list of available downloads. I looked at several BOINC projects and just don't see it. I click on "all versions" and it goes from 6.4.5 to 6.6.4. I have tried 6.4.5 and did not care for it, as discussed earlier; is 6.6.4 a viable option?


6.6.4 is a bad choice. 6.5.0 is the best alternative I have found. I have been using it for a month or so now and have no complaints, though I think the work fetch is still unbalanced and it does not yet properly handle CUDA vs. CPU ... but with futzed Resource Shares I keep CUDA work on hand, and that is good enough for now ...

There is a hidden page where you can get older versions... I always have to hunt around for it ... AH, try this


Thank you for the link; I have downloaded it and will try it.
Also, good luck with the RAID array. I tried doing RAID one time, but it never worked as it should have.
ID: 59468
Paul D. Buck
Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 59471 - Posted: 8 Feb 2009, 21:58:35 UTC - in response to Message 59468.  

Thank you for the link; I have downloaded it and will try it.
Also, good luck with the RAID array. I tried doing RAID one time, but it never worked as it should have.


Well, I cleared a 500 GB drive and am now cloning the current boot drive to the cleared drive ...

The RAID array has been reasonable, expensive and difficult to work with at times, but it pretty much works if you have the stars and moon aligned. One thing that got in the way was that the battery was being 'conditioned', and with installs I was rebooting so much that this process kept having to start over ... finally I reached a state where it could finish ... and with two new drives failing SMART ... well, that did not help matters.

I did get three drives to make a RAID 5 array, which I will now have to whack to add the additional drive.

At the moment I use the Apple RAID card for a RAID 5 array and SoftRAID for several stripe or mirror arrays ... I use a stripe array for my backup ... which is one place where I wish SoftRAID would get their finger out and add RAID 5 to their product ... at the moment all you can do for safety is a mirror with their software. Sigh ...

And good luck to you with 6.5.0 ... you can find the link any time you need it ... just use the download page, copy the link of any of the downloads, and strip off the file name, and there you are ... or you could copy the 6.6.4 link and change the numbers to the version you want to capture ...
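
For what it's worth, that "strip off the file name / change the numbers" trick is just string surgery on the URL. A hypothetical Python sketch (the base URL and installer file-name pattern below are assumptions from memory, not copied from the download page; only the string manipulation is the point):

```python
# Hypothetical example of the URL trick described above. The base URL and
# file-name pattern are assumptions; the string surgery is what matters.
current = "https://boinc.berkeley.edu/dl/boinc_6.6.4_windows_intelx86.exe"

wanted = current.replace("6.6.4", "6.5.0")   # swap in the version you want
listing = current.rsplit("/", 1)[0] + "/"    # strip the file name -> "hidden" directory list

print(wanted)
print(listing)
```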

Cheers ...
ID: 59471
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 59476 - Posted: 9 Feb 2009, 4:21:13 UTC
Last modified: 9 Feb 2009, 4:30:58 UTC

G-d bless some of those "crazy" people over at Folding@Home (and the obligatory: "yes", it CAN play Crysis!)

Atlas Folding Blog - Fighting Huntington's Disease with Folding@home


For those interested, I have successfully implemented an 8-GPU folding machine based upon NVIDIA GTX 295s. This is the testbed for what will ultimately be a rackmount GPU system with 38 GTX 295s dedicated 24/7 to F@H.

Atlas Folder's first node (Dagny) can hit a theoretical max of about 63,000 PPD utilizing ~7.152 TFLOPS of computing power.
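
Those two figures are internally consistent: four dual-GPU GTX 295 cards (8 GPUs) at the card's advertised 1.788 TFLOPS each come out to the ~7.152 TFLOPS quoted. A quick Python sanity check, using only numbers from the quote above:

```python
# Back-of-the-envelope check of the Atlas Folder node figures quoted above.
CARDS = 4                # four dual-GPU GTX 295s = 8 GPUs
TFLOPS_PER_CARD = 1.788  # advertised single-precision peak per GTX 295
PPD = 63_000             # quoted theoretical max points per day

total_tflops = CARDS * TFLOPS_PER_CARD
print(f"node peak: {total_tflops:.3f} TFLOPS")                  # ~7.152
print(f"points per TFLOPS per day: {PPD / total_tflops:,.0f}")  # ~8,809
```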


ID: 59476
Greg_BE
Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 59482 - Posted: 9 Feb 2009, 15:22:18 UTC

What's the cost of the hardware, and what is the power consumption for that beast?
ID: 59482
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 59485 - Posted: 9 Feb 2009, 17:31:37 UTC
Last modified: 9 Feb 2009, 17:36:35 UTC


Yes, the nodes like the one pictured are about $3,000.00 each, but I'm building at least four of them. My budget is $15,000.00.

YashBudini, I hadn't thought of that! I'm glad I decided not to put it at my house! Instead I'm running the machines at my business which already has a pretty high power usage.



And apparently he's paying Canadian electric rates:

I must admit that 6.5 cents CDN (5.2 cents USD) per kWh is a "good thing"



Further noting:

The system uses about 1125 watts when the SMP client is running, 1070 when it's not. The PPD per watt of running the SMP client is low, and it is pushing my power supply closer to its 1250 W limit, so I'm not running it. I'd like to, as it adds another 1500-2000 PPD, but it's too much for the time being.
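
Those two quotes are enough to estimate the running cost. A rough sketch, assuming the node draws roughly the quoted 1070 W around the clock at the quoted Canadian rate (assumptions, not figures from the post):

```python
# Rough monthly electricity cost for the node described above.
WATTS = 1070                 # quoted draw without the SMP client
RATE_CAD_PER_KWH = 0.065     # quoted Canadian rate
HOURS_PER_MONTH = 24 * 30

kwh = WATTS / 1000 * HOURS_PER_MONTH
print(f"{kwh:.0f} kWh/month, about ${kwh * RATE_CAD_PER_KWH:.2f} CAD/month")
# -> 770 kWh/month, about $50.08 CAD/month
```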
ID: 59485
Greg_BE
Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 59492 - Posted: 9 Feb 2009, 21:38:30 UTC - in response to Message 59485.  

Mainframe in a box at home ... too much up-front cost for my lowly budget.
Very nice to look at, though.


Yes, the nodes like the one pictured are about $3,000.00 each, but I'm building at least four of them. My budget is $15,000.00.

YashBudini, I hadn't thought of that! I'm glad I decided not to put it at my house! Instead I'm running the machines at my business which already has a pretty high power usage.



And apparently he's paying Canadian electric rates:

I must admit that 6.5 cents CDN (5.2 cents USD) per kWh is a "good thing"



Further noting:

The system uses about 1125 watts when the SMP client is running, 1070 when it's not. The PPD per watt of running the SMP client is low, and it is pushing my power supply closer to its 1250 W limit, so I'm not running it. I'd like to, as it adds another 1500-2000 PPD, but it's too much for the time being.

ID: 59492
Pharrg

Joined: 10 Jul 06
Posts: 10
Credit: 6,478
RAC: 0
Message 59622 - Posted: 17 Feb 2009, 1:42:34 UTC

Wow! Must be nice. Of course, I don't have a $15,000 budget. However, I currently have a pair of GTX 260s, and the speed gain I've seen on some CUDA-enabled projects has been so impressive that I'm thinking of replacing them with 3 GTX 295s. My motherboard will support true 3-way SLI with all 3 slots at full x16. Of course, when running CUDA you have to disable SLI or only 1 card gets used, but that's a simple mouse click.

As for speeds ... 3 NVIDIA GTX 295s ... each board has 480 cores in parallel and almost 2 GB of dedicated GDDR3 RAM. Each is capable of 1.788 teraflops of processing power. Altogether, running 3 will give my machine 1440 cores and a whopping 5.364 teraflops of power! That's faster than many early Cray supercomputers still in operation! And it will leave your CPU nearly idle for other tasks!

Do the math to see what his machine is going to be capable of ... now that's extreme! Lots of great science can be done with these leaps in processing power. If AMD doesn't go bankrupt soon, perhaps the ATI FireStream technology will take off too, or perhaps OpenCL.
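
Taking up the "do the math" invitation, here is the arithmetic from the post above as a small Python sketch (per-card figures as quoted; these are theoretical peaks, not sustained throughput):

```python
# The three-card build described above, using the quoted per-card figures.
CARDS = 3
CORES_PER_CARD = 480     # CUDA cores per GTX 295 (240 per GPU, 2 GPUs per card)
TFLOPS_PER_CARD = 1.788  # advertised peak per card

print(f"total cores: {CARDS * CORES_PER_CARD}")                 # 1440
print(f"total peak:  {CARDS * TFLOPS_PER_CARD:.3f} teraflops")  # 5.364
```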



ID: 59622
Pharrg

Joined: 10 Jul 06
Posts: 10
Credit: 6,478
RAC: 0
Message 59623 - Posted: 17 Feb 2009, 2:08:50 UTC

Oh, I forgot: in my limited testing with CUDA processing, I've traced pretty much all the errors I've seen when trying to run CUDA, including 'nvlddmkm' driver errors, driver recovery errors, BSODs, and other crashes, to temperature. This is especially a problem on the high-end GTX 200 series of cards. Even with the cards being two slots thick, they just can't squeeze a big enough heatsink and fan into them. Just look at how large CPU coolers have gotten in comparison, for much less computing power. I installed a Cooler Master V8 cooler on my Core i7 and that thing is a monster. But I must admit it works extremely well; in fact, it keeps my CPU cooler under load than the stock cooler kept it at idle.

I found that once I maxed out the fan speeds and cooling on the video cards, the errors stopped. Many extreme gamers have run into the same issues on these boards as well. Lots of people try to blame the software or the GPUs themselves, when really it's a simple case of overheating. Even games like Crysis with all settings maxed are nothing compared to what CUDA is capable of doing to a GPU when efficient code is used. The type of algorithm and process makes a difference too, so you'll see different results from different projects and apps. It will drive your card hard. Like I said, once I figured this out and worked to keep my cards cool, I haven't had a crash since.
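
For anyone who wants to verify the overheating theory on their own box, a minimal watchdog sketch. It assumes a modern nvidia-smi with --query-gpu support on the PATH (tooling the posters in 2009 would not have had), and the 90 C threshold is an arbitrary example:

```python
# Minimal GPU temperature watchdog. Assumes a modern nvidia-smi is installed.
import subprocess
import time

THRESHOLD_C = 90  # example alert level; tune for your card

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,temperature.gpu",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        idx, temp = (field.strip() for field in line.split(","))
        if int(temp) >= THRESHOLD_C:
            print(f"GPU {idx} at {temp} C -- check fans and airflow")
    time.sleep(30)
```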


ID: 59623
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 59681 - Posted: 20 Feb 2009, 12:15:51 UTC
Last modified: 20 Feb 2009, 13:09:57 UTC

From our friends at Folding@Home:

Folding@home passes the 5 petaflop mark


Based on our FLOP estimate (see http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats), Folding@home has recently passed the 5 petaflop mark. To put it in context, traditional supercomputers have just broken the 1 petaflop mark, and even that level of performance is very challenging to aggregate. The use of GPUs and Cell processors has been key to this; in fact, the NVIDIA numbers alone have just passed 2 petaflops.

Thanks to all who have contributed and we look forward to the next major milestones to be crossed!


and

New paper #63: Accelerating Molecular Dynamic Simulation on Graphics Processing Units


We're happy to announce a new paper (#63 at http://folding.stanford.edu/English/Papers). This paper describes the code behind the Folding@home GPU clients, detailing how they work, how we achieved such a significant speed up on GPUs, and other implementation details.

For those curious about the technical details, I've pasted our technical abstract below:

ABSTRACT. We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core.

Also, this software is now available for general use (for scientific research outside of FAH). Please go to http://simtk.org/home/openmm for more details.
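
For readers who want to try that code today, the library lives on as OpenMM. A minimal sketch of the implicit-solvent setup the abstract describes, using the modern OpenMM Python API (the API has changed since 2009, so treat the names below as current-version assumptions; 'input.pdb' is a placeholder input file):

```python
# Implicit-solvent MD on a GPU with OpenMM (descendant of the paper #63 code).
# Assumes OpenMM is installed and 'input.pdb' is a prepared protein structure.
from openmm import LangevinMiddleIntegrator, unit
from openmm.app import PDBFile, ForceField, Simulation, NoCutoff, HBonds

pdb = PDBFile("input.pdb")
ff = ForceField("amber99sb.xml", "amber99_obc.xml")   # OBC implicit solvent
system = ff.createSystem(pdb.topology, nonbondedMethod=NoCutoff,
                         constraints=HBonds)
integrator = LangevinMiddleIntegrator(300 * unit.kelvin, 1 / unit.picosecond,
                                      0.002 * unit.picoseconds)
sim = Simulation(pdb.topology, system, integrator)    # uses a GPU platform if present
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()
sim.step(1000)                                        # 2 ps of dynamics
```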



Here are some "fair use" tid-bits:


Graphics processing units (GPUs) originated as specialized hardware useful only for accelerating graphical operations, but they have grown into exceptionally powerful, general purpose computing engines. Modern GPUs far exceed CPUs in terms of raw computing power [1,2]. As a result, the use of GPUs for general purpose computing has become an important and rapidly growing field of research. Many important algorithms have been implemented on GPUs, often leading to a performance gain of one to two orders of magnitude over the best CPU implementations [3].

Algorithms used for MD are traditionally evaluated based on how they scale with the number of atoms being simulated, but scaling considerations are only meaningful when the number of atoms is large compared to the number of math units. GPUs have already reached a point where, for small or medium sized proteins, the number of math units may be comparable to the number of atoms. On such a processor, the total amount of computation to be done may be much less important than how fully the available processing resources can be utilized.

In contrast, GPUs have only a very small amount of special purpose cache memory and hide latency with massive multithreading. Programs cannot rely on caches to hide latencies from random memory access. Instead, it is absolutely essential to group related data together and access it in contiguous blocks. In many cases, it is more efficient to repeat a calculation than to store the result in memory and reload it later.
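
That "group related data together" advice maps directly onto data layout. A small NumPy illustration of the array-of-structures vs. structure-of-arrays distinction the authors are gesturing at (a CPU-side analogue of coalesced GPU access, not code from the paper):

```python
# AoS vs SoA: why contiguous per-component arrays suit wide parallel hardware.
import numpy as np

n = 100_000
aos = np.random.rand(n, 3)            # array-of-structures: x,y,z interleaved per atom

x_view = aos[:, 0]                    # reading all x values strides through memory
soa_x = np.ascontiguousarray(x_view)  # structure-of-arrays: one contiguous block

print(x_view.flags["C_CONTIGUOUS"])   # False -- strided view
print(soa_x.flags["C_CONTIGUOUS"])    # True  -- contiguous copy
```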

Because our GPU implementations have been developed over a period of time, some of the latest advances in GPU hardware have not been fully exploited. For instance, recent ATI hardware permits "scatter" operations, which involve writing to different memory locations within a kernel. Our ATI implementation has avoided scatter operations because they were not available on earlier generation hardware. It might be possible to achieve greater computation efficiency by reengineering certain methods to take advantage of scatter operations. Another recent advance in GPU computing is support for double-precision floating point computations on ATI and NVIDIA GPUs. Double precision arithmetic still carries a significant performance penalty relative to single-precision arithmetic. It may be worth investigating situations in which the increased accuracy of higher precision arithmetic might be worth the additional computational cost.

More generally, there are undoubtedly many additional methods that could be implemented on GPUs to extend the range of available simulation scenarios. For example, more sophisticated force fields, such as polarizable force fields [27], could benefit from GPU acceleration. The precise details of how best to implement these methods remain to be worked out. We hope that our current work will help form the basis for an ever increasing library of GPU accelerated molecular dynamics techniques.

However, it immediately became clear that exploiting architectural features of CUDA allowed for significantly more efficient execution.

In summary, realizing the full potential of the GPU still requires considerable effort in reworking the data structures and code to take advantage of the particular GPU architecture, and not all algorithms are amenable to these types of architecture.
ID: 59681
Paul D. Buck
Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 59698 - Posted: 21 Feb 2009, 6:16:41 UTC

Just as interesting to me: when I installed the drivers for my brand new ATI card, I was offered the chance to install Folding@Home ... now why did the BOINC guys not get us on that list?

I did not install FaH, as I was solely interested in getting Milkyway to run a shade faster ... sadly, and most disappointingly, it takes up to 13-14 seconds to run a task ... almost not worth getting an ATI GPU ...

On the other hand, aside from the Linux system which I should be turning off, all my systems now have a GPU running work for me ...

Still need more projects though ... GPU Grid won't let you stock up more than 4 tasks, and my one system will run through those in 6 hours, so even a half-day outage means I run dry ... sigh ...
ID: 59698
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 59699 - Posted: 21 Feb 2009, 11:17:04 UTC

Milkyway, isn't that RPI's project? My s/o is an alum from there.

What's the deal with GPU Grid? I tried it with a 9600 GSO on an older AMD mono-core, and it tied up BOINC ... when GPU Grid was running on the GPU, it was also the only WU running in BOINC, displacing Rosie. I thought I could have GPU Grid on the GPU and Rosie on BOINC, both at the same time ... just out of curiosity, since I've never actually finished a GPU Grid WU, how are they on points?
ID: 59699
mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,214,047
RAC: 1,450
Message 59700 - Posted: 21 Feb 2009, 11:18:13 UTC - in response to Message 59698.  

Just as interesting to me is the offer when I installed the drivers for my brand new ATI card to install Folding@Home ... now why did not the BOINC guys get us on that list?


BUT at least one project is trying to get more people. It takes someone to be the first; hopefully this is but the first ripple in the tidal wave to come!
ID: 59700
Paul D. Buck
Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 59711 - Posted: 21 Feb 2009, 22:34:39 UTC - in response to Message 59699.  

Milkyway, isn't that RPI's project? My s/o is an alum from there.

What's the deal with GPU Grid? I tried it with a 9600 GSO on an older AMD mono-core, and it tied up BOINC ... when GPU Grid was running on the GPU, it was also the only WU running in BOINC, displacing Rosie. I thought I could have GPU Grid on the GPU and Rosie on BOINC, both at the same time ... just out of curiosity, since I've never actually finished a GPU Grid WU, how are they on points?


OK, if you run 6.4.5 or earlier, you have to use a cc_config file to make things work as they should. You have to run at least 6.2.19 (I think). If you listen to Paul (few do), you will run 6.5.0 on Windows, which does not require any special configuration files. Video drivers: win32 should be 181.22 ...
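
For the curious, a hedged sketch of what such a cc_config file did on those older clients. One common workaround was over-reporting the CPU count with <ncpus> so the CUDA app's feeder thread did not displace a CPU task; <ncpus> is a real cc_config option, but whether it is the right knob for your exact client version is an assumption, so check the BOINC documentation for your release:

```python
# Writes a minimal cc_config.xml of the kind used as a GPU workaround on
# pre-6.5 BOINC clients. <ncpus> is a documented cc_config option; treating
# it as the fix for 6.4.5-era GPU scheduling is an assumption (see above).
from pathlib import Path

real_cores = 4  # cores the box actually has

xml = f"""<cc_config>
  <options>
    <ncpus>{real_cores + 1}</ncpus>
  </options>
</cc_config>
"""

Path("cc_config.xml").write_text(xml)  # place in the BOINC data directory
print(xml)
```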

Then you should be good to go ... On my i7 I usually have 14 tasks in flight ... 8 CPU tasks, 4 GPU tasks (2 on each GTX 295 card), plus QCN and FreeHAL, all at the same time ...

Each GPU Grid task takes between 4 and 30 hours, depending on the card and the task, and pays 2K-and-change to 3K-and-change ...

With my i7, Q9300, 2 dual-CPU (HT-capable) Dells, and my Mac Pro, I was doing about 12-20K per day (24 cores total) ... I added a total of 6 GPU cores (a 9800 GT, a GTX 280, and 2 GTX 295s (2 cores each)) and my total went up to 60-80-some K per day (see Willy's stats for Paul D. Buck; the earliest stats show some GPUs running already).

With the ATI card I am jumping up again. As it has run for less than 24 hours and I was still doing work on other computers not sure what the ATI card is going to do per day yet...

So, by populating all the PCI-e slots I've got, I have increased production (as measured by cobblestones, the BOINC credit unit) by a factor of 4 or more ...

Your 9600 card will likely take longer than my 9800 GT; I would expect about 20-24 hours per task, or maybe a little more. Still, it is "free" in the sense that you can add production just by attaching to one more project.

So: step one, make sure you are running the right drivers; two, BOINC Manager 6.5.0; three, attach to GPU Grid; four, rake in the work ... :)
ID: 59711
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 59714 - Posted: 22 Feb 2009, 4:29:24 UTC

Trouble finding the link to BOINC Manager 6.5.0.

Currently have 6.4.5 = no go with CPU + GPU.
ID: 59714
Paul D. Buck
Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 59715 - Posted: 22 Feb 2009, 4:40:46 UTC - in response to Message 59714.  
Last modified: 22 Feb 2009, 4:42:18 UTC

Trouble finding the link to BOINC Manager 6.5.0.

Currently have 6.4.5 = no go with CPU + GPU.


You have to use the generic download list to find the version you need.

I always go to the DL page, copy the download link they present, paste it, and cut off the file name to get the generic list.

{edit}
Don't forget to remove that config file when using 6.5.0, as it is not needed. BTW, the MW app is so alpha that it is not affected by the manager version ...
ID: 59715
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 59716 - Posted: 22 Feb 2009, 4:47:51 UTC

OK, will play with it tomorrow.

Want to leave my three PS3s on F@H, and try my two GPUs on BOINC / GPU Grid.

Have seven more GPUs (all 9600 series, got 'em cheap) eventually coming online.
ID: 59716