Rosetta & Parallelization (gaming consoles)

Message boards : Rosetta@home Science : Rosetta & Parallelization (gaming consoles)

The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 48332 - Posted: 4 Nov 2007, 2:34:39 UTC
Last modified: 4 Nov 2007, 2:45:33 UTC

With apologies to "Admin" for unintentionally threadjacking.

This topic is intentionally being posted in the Science section rather than Number Crunching, in the hope that someone from the Project will read and contribute.

The story so far - from David Baker:

Yes, the size of the code is a problem for getting up and running. We have a new, much cleaner and for now smaller version which we will be sending out on Rosetta@home soon; it might be a better starting point for code optimization experts interested in helping out Rosetta@home!
The_Bad_Penguin
Message 48333 - Posted: 4 Nov 2007, 2:35:39 UTC
Last modified: 4 Nov 2007, 2:47:18 UTC

From The_Bad_Penguin:

Would someone from the project offer the professional courtesy of providing a definitive answer with respect to Rosetta's memory footprint and adaptability for parallel processing on gaming consoles, à la this thread?

Thanking you in advance.
The_Bad_Penguin
Message 48334 - Posted: 4 Nov 2007, 2:36:51 UTC

From svincent:

My 2 cents on this topic, which seems to crop up in one form or another quite frequently.

I don't know the specifics of why Rosetta hasn't done this yet but I have worked on optimizing large projects in the past and can suggest some of the issues they might be faced with.

If it was easy to use SIMD instructions like SSE3, etc. in Rosetta I imagine it would have already been done, but the fact is many algorithms just don't lend themselves to easy data-level parallelization. Some do, like those in Photoshop filters or digital signal processing, but if the next step in the process is always dependent on the results of the previous step SIMD doesn't help, and from what little I know of this type of molecular simulation software this will be true for Rosetta. I'm sure people have looked hard at the innermost loops of Rosetta and concluded that either it couldn't be vectorized or that the effort needed to do so would be better spent elsewhere.
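To make this concrete, here is a minimal C++ sketch (not Rosetta code; the routine names are invented purely for illustration) contrasting a loop that is trivially data-parallel with one whose loop-carried dependency defeats SIMD:

#include <cstddef>
#include <vector>

// Data-parallel: every element is independent, so a compiler (or SSE
// intrinsics) can process several floats per instruction.
void scale_energies(std::vector<float>& e, float w) {
    for (std::size_t i = 0; i < e.size(); ++i)
        e[i] *= w;                       // no dependency between iterations
}

// Serial dependency: iteration i needs the result of iteration i-1, much
// like a Monte Carlo trajectory where each move starts from the previous
// conformation. SIMD across iterations cannot help here.
float accumulate_trajectory(const std::vector<float>& step) {
    float x = 1.0f;
    for (std::size_t i = 0; i < step.size(); ++i)
        x = x + step[i] * x;             // loop-carried dependency on x
    return x;
}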

Even if the above is incorrect there are other issues to consider.

Maintaining a code base containing SIMD code (writing it in the first place is quite a specialized skill) has its painful aspects. It's necessary to write multiple versions of the same routine, one for each kind of instruction set that's out there. If anything in the code needs changing it's necessary to change all the routines. For this reason you'd probably only want to implement code using SIMD when the code is mature and almost certain not to change. Not a show stopper but something that needs to be taken into account.
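As a rough illustration of that maintenance burden (the routine names and the CPU-detection stub are invented, not anything from the Rosetta code base), the same operation ends up existing as a scalar version, an SSE version, and a dispatcher, and every change must be mirrored in each:

#include <cstddef>

// Scalar reference version: the one that is easy to keep correct.
void add_scalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

#if defined(__SSE__)
#include <xmmintrin.h>
// SSE version: must be re-validated every time the scalar one changes.
void add_sse(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
    for (; i < n; ++i) out[i] = a[i] + b[i];   // scalar tail
}
#endif

// Placeholder for a real CPUID-based feature check.
bool cpu_has_sse() { return true; }

// Dispatcher: picks the best available variant at run time.
void add(const float* a, const float* b, float* out, std::size_t n) {
#if defined(__SSE__)
    if (cpu_has_sse()) { add_sse(a, b, out, n); return; }
#endif
    add_scalar(a, b, out, n);
}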

These problems are compounded when you try to convert software written for a general purpose CPU to run on a GPU or something like the Cell processor in the PS3. The specs of these processors may look impressive but they are somewhat restricted in what they can do relative to a CPU and programming them requires an entirely different way of thinking about the problem: they have the reputation of being very difficult to program effectively, and it would probably involve a major rewrite of Rosetta to get it working. F@H seems to have overcome these difficulties but even there, the programs that can be run on the PS3 or on ATI graphics cards are only a subset of those that can be run on a general purpose CPU.

Ultimately I imagine this comes down to a question of what is the most effective use of Rosetta's programming resources. Is it better to fix bugs and add refinements that will improve the accuracy of the predicted structures or to invest the resources needed to make it run on faster hardware? Right now probably the former: Rosetta is after all a research project. Perhaps in the future when the project is complete ( if it ever is ) and passed off to the World Community Grid this will change.
The_Bad_Penguin
Message 48335 - Posted: 4 Nov 2007, 2:38:11 UTC

From The_Bad_Penguin:

Thank you for a well thought out response.

For example, how was this possible?

"When David Baker, who also serves as a principal investigator for Howard Hughes Medical Institute, originally developed the code, it had to be run in serial - broken into manageable amounts of data, with each portion calculated in series, one after another.

Through a research collaboration, SDSC's expertise and supercomputing resources helped modify the Rosetta code to run in parallel on SDSC's massive supercomputers, dramatically speeding processing, and providing a testing ground for running the code on the world's fastest non-classified computer.

The groundbreaking demonstration, part of the biennial Critical Assessment of Structure Prediction (CASP) competition, used UW professor David Baker's Rosetta code and ran on more than 40,000 central processing units (CPUs) of IBM's Blue Gene Watson supercomputer, using the experience gained on the Blue Gene Data system installed at SDSC."


but the fact is many algorithms just don't lend themselves to easy data-level parallelization. Some do, like those in Photoshop filters or digital signal processing, but if the next step in the process is always dependent on the results of the previous step SIMD doesn't help, and from what little I know of this type of molecular simulation software this will be true for Rosetta. I'm sure people have looked hard at the innermost loops of Rosetta and concluded that either it couldn't be vectorized or that the effort needed to do so would be better spent elsewhere.




Agreed.

These problems are compounded when you try to convert software written for a general purpose CPU to run on a GPU or something like the Cell processor in the PS3. The specs of these processors may look impressive but they are somewhat restricted in what they can do relative to a CPU and programming them requires an entirely different way of thinking about the problem: they have the reputation of being very difficult to program effectively, and it would probably involve a major rewrite of Rosetta to get it working. F@H seems to have overcome these difficulties but even there, the programs that can be run on the PS3 or on ATI graphics cards are only a subset of those that can be run on a general purpose CPU.



Also agreed.

But, if this is true, why did they bother to make the effort to convert Rosetta to parallelized code to run on supercomputers / IBM Blue Gene?

It would seem that the PS/3 (as opposed to PCs, with their many different CPUs, operating systems, and amounts of RAM and HDD) is both standardized and an open platform.

F@H is at the petaflop level. I doubt Baker Labs would turn down petaflop level potential.

Arguably, it would be resources well spent.

I'd really be curious to hear what DB himself has to say, now that the PS/3 is on the 65nm chip, with double precision and reduced watts.

Ultimately I imagine this comes down to a question of what is the most effective use of Rosetta's programming resources. Is it better to fix bugs and add refinements that will improve the accuracy of the predicted structures or to invest the resources needed to make it run on faster hardware? Right now probably the former: Rosetta is after all a research project. Perhaps in the future when the project is complete ( if it ever is ) and passed off to the World Community Grid this will change.


The_Bad_Penguin
Message 48336 - Posted: 4 Nov 2007, 2:39:35 UTC

From svincent:



Also agreed.

But, if this is true, why did they bother to make the effort to convert Rosetta to parallelized code to run on supercomputers / IBM Blue Gene?


Just a guess, but perhaps the parallelism there was at the same level as what we're seeing in the BOINC version of Rosetta, with one decoy being assigned to each CPU. Memory constraints, etc., permitting, that might be a relatively easy thing to implement.
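If the Blue Gene parallelism really was at the decoy level, it is easy to see why that is the simple case. The following is an illustrative, standalone C++ example (not project code): each worker thread generates its own decoys independently, nothing is shared during the run, and only the best score per worker is collected at the end.

#include <algorithm>
#include <iostream>
#include <random>
#include <thread>
#include <vector>

// Stand-in for one independent Rosetta trajectory ("decoy"): in reality
// this would fold a structure and return its energy.
double make_decoy(std::mt19937& rng) {
    std::uniform_real_distribution<double> energy(-300.0, 0.0);
    return energy(rng);
}

int main() {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const int decoys_per_worker = 1000;
    std::vector<double> best(workers, 0.0);

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([w, decoys_per_worker, &best] {
            std::mt19937 rng(1234u + w);          // independent seed per worker
            double lowest = 1e9;
            for (int d = 0; d < decoys_per_worker; ++d)
                lowest = std::min(lowest, make_decoy(rng));
            best[w] = lowest;                     // no sharing between workers
        });
    }
    for (auto& t : pool) t.join();

    std::cout << "best energy: "
              << *std::min_element(best.begin(), best.end()) << "\n";
}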


It would seem that the PS/3 (as opposed to PCs, with their many different CPUs, operating systems, and amounts of RAM and HDD) is both standardized and an open platform.


True, but in addition to the issues I mentioned, F@H comes preinstalled on PS3's and I imagine users would be more likely to run that than R@H, even if the latter were available.


F@H is at the petaflop level. I doubt Baker Labs would turn down petaflop level potential.


Other things being equal, I doubt he would either!


Arguably, it would be resources well spent.


Perhaps as stream computing becomes more mainstream, porting to these platforms will become a more attractive option. Still, I don't see it happening soon.


I'd really be curious to hear what DB himself has to say, now that the PS/3 is on the 65nm chip, with double precision and reduced watts.


The_Bad_Penguin
Message 48337 - Posted: 4 Nov 2007, 2:40:41 UTC

From Viking69:

Rosetta is after all a research project. Perhaps in the future when the project is complete ( if it ever is ) and passed off to the World Community Grid this will change.



I can't see a time when this project would ever be complete, but possibly replaced with a more powerful project as the technology advances.

I don't understand your saying that if this project did 'complete' it would be passed on to WCG. WHY? All that does is process work in a similar vein to all the other BOINC projects, although it started up as another system to join people's PCs together to process data. Why would Rosetta pass its work off to that system?

The_Bad_Penguin
Message 48338 - Posted: 4 Nov 2007, 2:41:35 UTC

From The_Bad_Penguin:

Unable to do the same with the 6 SPEs in the PS/3's Cell BE? Memory restrictions?


Just a guess, but perhaps the parallelism there was at the same level as what we're seeing in the BOINC version of Rosetta, with one decoy being assigned to each CPU. Memory constraints, etc., permitting, that might be a relatively easy thing to implement.



I think Sony is desperate for sales; they've lost what, $500 million on PS/3 hardware so far? I'd imagine they'd have to help with the porting, and putting an R@H icon on the PS/3 is likely a small price to boost sales to crazy people like me. As time goes on, I do intend to purchase multiple PS/3s. $399 is a bargain! And it'll all go to F@H until something else comes along that will use all 6 of the SPEs.

True, but in addition to the issues I mentioned, F@H comes preinstalled on PS3's and I imagine users would be more likely to run that than R@H, even if the latter were available.



No pain, no gain. F@H chanced it, and they're at a petaflop, and I have no doubt they'll hit 2 pflops within a year.

Can this potential really be ignored?

If Rosetta requires a larger memory footprint than the PS/3 offers, you can't get blood from a (Rosetta) stone.

I'd just like someone from the project to come out and make the definitive statement that this is the case.

And if they can't make such a definitive statement, let's at least have a discussion on other potential concerns.

Perhaps as stream computing becomes more mainstream, porting to these platforms will become a more attractive option. Still, I don't see it happening soon.


I'd still like the good Doc himself to weigh in.

The_Bad_Penguin
Message 48339 - Posted: 4 Nov 2007, 2:42:19 UTC

From svincent:

I can't see a time when this project would ever be complete, but possibly replaced with a more powerful project as the technology advances.

I don't understand your saying that if this project did 'complete' it would be passed on to WCG. WHY? All that does is process work in a similar vein to all the other BOINC projects, although it started up as another system to join people's PCs together to process data. Why would Rosetta pass its work off to that system?


I agree that my original sentence could have been phrased better, but the WCG already makes use of Rosetta, although I don't know what version they use. See

http://www.worldcommunitygrid.org/projects_showcase/viewHpf2Research.do

The_Bad_Penguin
Message 48340 - Posted: 4 Nov 2007, 2:43:14 UTC

From svincent:

I think Sony is desperate for sales; they've lost what, $500 million on PS/3 hardware so far? I'd imagine they'd have to help with the porting, and putting an R@H icon on the PS/3 is likely a small price to boost sales to crazy people like me. As time goes on, I do intend to purchase multiple PS/3s. $399 is a bargain! And it'll all go to F@H until something else comes along that will use all 6 of the SPEs.


My understanding was that the PS3 is actually sold right now at a loss as Sony are hoping to push Blu-Ray and sell games along with the console.

The following paper discusses scientific programming on the PS3. I skipped the gruesome technical bits in the middle and read just the introduction and summary, where the weaknesses of the processor are discussed.

http://www.netlib.org/utk/people/JackDongarra/PAPERS/scop3.pdf

The_Bad_Penguin
Message 48341 - Posted: 4 Nov 2007, 2:44:07 UTC
Last modified: 4 Nov 2007, 2:48:40 UTC

From The_Bad_Penguin:

Thanks, I think I had previously read (and downloaded) a copy of this, but I am currently re-reading it.

Yes, IIRC Sony was taking about a $250 loss on each PS/3. This may be less now, with the increased yields of the new 65nm CBEAs.

That's why I say the new 40GB PS/3 with the 65nm CBEA is a bargain at $399, and I would not hesitate to purchase multiple units over time, as finances permit.

Sorry that Sony will lose money on me, as I am not a gamer, and regular DVDs are fine for me.

I would be purchasing it strictly as a (super)computer.

Right now, today, not at some unknown time in the future, F@H is able to use it, and that's good enough for me.

Hope one day that Rosie will as well.

My understanding was that the PS3 is actually sold right now at a loss as Sony are hoping to push Blu-Ray and sell games along with the console.

The following paper discusses scientific programming on the PS3. I skipped the gruesome technical bits in the middle and read just the introduction and summary, where the weaknesses of the processor are discussed.

http://www.netlib.org/utk/people/JackDongarra/PAPERS/scop3.pdf


The_Bad_Penguin
Message 48342 - Posted: 4 Nov 2007, 3:53:21 UTC
Last modified: 4 Nov 2007, 4:39:44 UTC

I have (re)read the academic paper suggested by svincent.


First, I would like to note that there are at least three instances of the PS/3 being used for "scientific programming" that I am aware of.

(1) Most people are already aware of F@H.

(2) I am a little uncertain about the BOINC project PS3Grid. My initial understanding was that the SPEs were not used, as the PS/3 was run in Linux mode. However, browsing their website, I came across: "A Cell optimized application like CellMD reaches over 30 Gflop/s on a Cell processor (over 25 Gflop/s on the PlayStation3 due to the fact that only 6 SPEs out of 8 can be used). With 8 SPEs, the speed-up is 19 times an equivalent scalar implementation on an AMD Opteron 2GHz."

(3) Additionally, as reported by Wired Magazine on October 17, 2007, an interesting application of the PlayStation 3 in a cluster configuration was implemented by astrophysicist Dr. Gaurav Khanna, who replaced time used on supercomputers with a cluster of eight PlayStation 3s.

Rosie should note that this is not necessarily uncharted waters.



The following paper discusses scientific programming on the PS3. I skipped the gruesome technical bits in the middle and read just the introduction and summary, where the weaknesses of the processor are discussed.

http://www.netlib.org/utk/people/JackDongarra/PAPERS/scop3.pdf




I note the following:

(A) The main focus of the paper is on creating a "cluster" of PS/3's for the purpose of "scientific programming".

While perhaps a noble ultimate goal, I believe the initial goal should be to get a project to benefit from the 6 SPEs in a single PS/3 rather than the 48 SPEs in a cluster of eight PS/3s.


(B) The paper is VERY weak in discussing the PS/3 and F@H: a mere two paragraphs, which say even less.


(C) The paper identifies five major "limitations" in the use of PS/3's for scientific programming:

(1) "Main memory access rate. The problem applies to the CELL processor and frankly speaking most modern processors, and is due to the fact that execution units can generate floating point results at a speed that is much higher than the speed at which the memory can feed data to the execution units."

Since this is not endemic to the Cell or the PS/3 alone, Rosie can ignore this "limitation".


(2) "Network interconnect speed. The PlayStation 3 is equipped with a GigaBit Ethernet network interface... The bottom line is that only computationally intensive applications can benefit from connecting multiple PS3s together to form a cluster. Computation, even as floating point intensive as dense matrix multiply, cannot be effectively parallelized over many PS3s."

As Rosie's initial goal should be an app working on a single PS/3 rather than a cluster of PS/3's, this "limitation" can be ignored.


(3) "Main memory size. The PlayStation 3 is equipped with only 256 MB of main memory. This represents a serious limitation when combined with the slowness of the network interconnect."

Again, the initial goal is not clusters of PS/3s via network interconnect, but a single PS/3.

That being said, the size of main memory IS a limitation, and it may in fact be fatal to any efforts to port Rosie over to the PS/3. However, I have yet to hear anyone officially affiliated with the Project make a definitive statement that this is a fact.


(4) "Floating point arithmetic shortcomings. Peak performance of double precision floating point arithmetic is a factor of 14 below the peak performance of single precision."

True, and false. True of PS/3s with the original version of the CBEA; false for the newer PS/3s being built with the new 65nm version of the Cell BE.

Not a limitation to Rosie.


(5) "Programming paradigm. One of the most attractive features of the CELL processor is its simple architecture and the fact that all its functionalities can be fully controlled by the programmer. In most other common processors performance can only be obtained with a good exploitation of cache memories whose behavior can be controlled by the programmer only indirectly."

This is likely a legitimate "limitation" for Rosie.
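To illustrate what "fully controlled by the programmer" means in practice, here is a heavily simplified C++ sketch of the Cell programming style. The DMA helpers are placeholders standing in for the real SPE primitives (MFC DMA commands plus tag waits), so this is only an illustration of the paradigm, not working SPE code:

#include <cstddef>
#include <cstring>

// Placeholder DMA helpers: on a real SPE these would be MFC DMA commands
// plus a tag wait; plain memcpy stands in so the sketch compiles anywhere.
static void dma_get(void* local, const void* main_mem, std::size_t bytes) {
    std::memcpy(local, main_mem, bytes);
}
static void dma_put(const void* local, void* main_mem, std::size_t bytes) {
    std::memcpy(main_mem, local, bytes);
}

const std::size_t CHUNK = 4096;   // working set must fit in the 256 KB local store

// On a conventional CPU the cache moves data in and out implicitly; on an
// SPE the programmer explicitly stages every chunk through local store.
void scale_on_spe(float* main_mem_data, std::size_t n, float w) {
    static float local[CHUNK];    // stand-in for the SPE's local-store buffer
    for (std::size_t off = 0; off < n; off += CHUNK) {
        std::size_t count = (n - off < CHUNK) ? (n - off) : CHUNK;
        dma_get(local, main_mem_data + off, count * sizeof(float));
        for (std::size_t i = 0; i < count; ++i)
            local[i] *= w;        // compute entirely out of local store
        dma_put(local, main_mem_data + off, count * sizeof(float));
    }
}

The point is that the buffering, chunk sizes, and transfers a cache handles automatically on a PC all become the programmer's responsibility, which is where much of the porting effort would go.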


Unless there are additional "limitations" of the new 65nm Cell BE in a PS/3, then of the five "limitations" listed in the paper, only two are potentially applicable to Rosie.

If the memory footprint required by Rosie is greater than what the PS/3-CBEA can provide, there is little that can be done.

Alternatively, if Rosie can fit within the PS/3-CBEA memory footprint, then the "only" remaining limitation is the programming.

Note the quotes around "only".

I am aware of the effort that would be involved.

From Wikipedia:

In November 2006, David A. Bader at Georgia Tech was selected by Sony, Toshiba, and IBM from more than a dozen universities to direct the first STI Center of Competence for the Cell Processor. This partnership is designed to build a community of programmers and broaden industry support for the Cell processor. There is a Cell Programming tutorial video available.


All I am suggesting is that somewhere in the cost-benefit analysis, there H-A-S to be a point where the expenditure of resources to port Rosie to the PS/3 is justified.

What is the Project's answer to this question? 500 tflops? 1 pflop? 25 pflops?

I can't believe the Project would say the expenditure of resources could not be justified for a yottaflop !!!
zombie67 [MM]
Joined: 11 Feb 06
Posts: 316
Credit: 6,621,003
RAC: 0
Message 48344 - Posted: 4 Nov 2007, 4:12:53 UTC

To be clear, F@H, PS3GRID, and yoyo@home applications all use the SPEs. SIMAP has a generic PPC/Linux application that will run on the PS3, but it uses only the PPC/PPE controller, and has the expected performance of a G4 Mac. Someone on SETI@home created a non-SPE app, but I haven't seen any further development with it since S@H moved to multibeam. Hydrogen@home has plans for a PS3 application, but no details or timeline.

The_Bad_Penguin
Message 48345 - Posted: 4 Nov 2007, 4:19:01 UTC - in response to Message 48344.  
Last modified: 4 Nov 2007, 4:20:46 UTC

Thanks for filling in some of the blanks for me; I was unaware of these other projects and their use of the PS/3 CBEA.

When I get a second PS/3, I'll have to look at adding it to yoyo.
The_Bad_Penguin
Message 48364 - Posted: 5 Nov 2007, 0:29:43 UTC
Last modified: 5 Nov 2007, 0:35:32 UTC

A sort of "good news / bad news" report from Engadget:

Sony says the 40GB PS3 is still using 90nm chips

Posted Nov 3rd 2007 2:06AM by Nilay Patel
Filed under: Gaming

We'd been hearing that Sony's new 40GB PS3 featured a revised design with a 65nm Cell processor and improved cooling, but sadly it looks like those reports were in error -- a Sony spokesperson has told Heise Online that the 40GB model continues to use 90nm processors, but does feature an updated design with a lower power consumption of just 120 to 140 watts, compared to 180 to 200 watts for the older models. Sony says it's still planning on moving to 65nm processors in the near future, but for now, it looks like the PS3 is 90nm across the board.


Bad News = Apparently, for the time being, PS/3s are still using the older 90nm chip (no double precision, no power/heat savings from the chip itself).

Good News = Sony claims that even without the newer 65nm Cell BE chip, watts went down from about 200 to 140. Now, imagine how many more watts will be saved once the 65nm actually does replace the 90nm!!!
The_Bad_Penguin
Message 48596 - Posted: 12 Nov 2007, 22:14:27 UTC - in response to Message 48364.  

A updated posting from The Inquirer:

"Whilst previous rumours of an 65nm PS3 were seemingly quashed by Sony at a later date, it seems that Sony hadn't been 100 per cent truthful.

In an interview with Japanese site AV Watch, Sony Computer Entertainment President and CEO Kaz Hirai has given the final word on the new console's innards.

It seems the 40GB PS3s out there do have 65nm CPUs, coupled with vanilla 90nm GPUs (the PS3 RSX chip).

This fully explains the reduced power consumption numbers posted by the new 40GB PS3, which left many scratching their heads once Sony dismissed earlier rumours of a die shrink."




A sort of "good news / bad news" report from Engadget:

Sony says the 40GB PS3 is still using 90nm chips

zombie67 [MM]
Message 48740 - Posted: 17 Nov 2007, 9:58:54 UTC

Great news! No need to install Linux on the HD of the PS3 any more. PS3GRID now has a "live" version you run off a USB thumb drive, just like you would with any Linux live CD. This "live" thumb drive works with any other project with PS3 apps, like yoyo@home.

http://www.ps3grid.net/PS3GRID/forum_thread.php?id=99
Dotsch
Joined: 12 Feb 06
Posts: 111
Credit: 241,803
RAC: 0
Message 50183 - Posted: 30 Dec 2007, 14:15:19 UTC - in response to Message 48344.  

Someone on SETI@home created a non-SPE app, but I haven't seen any further development with it since S@H moved to multibeam.

The SETI PS3 application is an SPE version, too.

The_Bad_Penguin
Message 50413 - Posted: 7 Jan 2008, 1:12:34 UTC
Last modified: 7 Jan 2008, 1:38:11 UTC

I've just completed re-allocating the resource share among my Boinc projects.

I was doing virtually 100% SIMAP for a while; now that's down to about 50%.

I'm giving Rosie about a 25% resource allocation.



And, for the record, I am willing to put my money where my mouth is:


A perfect place to post the wonderful news that as of 12:52 p.m. on January 4, 2008, I am now the proud parent of twins, a "boy" and a "girl"...

I'm thinking of naming them "Rosie" and "Ralph".



Twin PS/3s, that is, lol!!! Picked up two 40GB PS/3s at $299 each (long story, but the price obviously can't be beat).

Not to mention the two ATI x1950pro gpu's that I already had purchased (another long story, but the cost for both of them together will eventually be $25).



<rant>

How 'bout it Rosie, you wanna piece of that power?!

Then do something, 'cause for now, it's all going to go to F@H, which already has over 1 pflop.

When do you expect to reach a pflop?

I expect my total RAC equivalent, with the addition of two ps/3's to my existing crunchers, to be close to 10,000.

What is my current Rosie RAC? Single digit, down from quadruple digits !!!

(PS/3s on BOINC projects (PS3Grid, Yoyo OGR-25) seem to be averaging between 3,000 and 3,500 credits per 24 hours of crunching. Two PS/3s = roughly 6,000 - 7,000 credits/day.)

I'll probably try installing Yellow Dog Linux / Boinc on at least one of them in the near future.

</rant>




I am already planning the purchase of a third ps/3.
Michael G.R.

Joined: 11 Nov 05
Posts: 264
Credit: 11,247,510
RAC: 0
Message 50483 - Posted: 9 Jan 2008, 4:53:55 UTC

It's very cool that you are purchasing a lot of crunching power and dedicating it to life science DC projects.

I also wish that Rosetta would add things like SSE/SSE2/etc and PS3/GPU support, but I also understand that it is a research project that is constantly changing the scientific part of the code and that - unlike some other projects with fairly stable scientific code - the best way to get results might not always be to to speed up the crunching.

Spending time writing better scientific code can have more value than spending time optimizing for SSE or porting to GPU because it allows the code to do things that it couldn't otherwise do, even with more FLOPS, and thus can help discover medical breakthroughs.

Of course, in a perfect world the Rosetta team would be doing both the scientific work and the optimizations at the same time, but I can understand that they might not be able to do both, or that it might be progressing more slowly than I might like.

I do wish the project people would communicate a bit better as far as optimizations go, though. Just giving us a status update would be much appreciated.