Rosetta@home and emerging computer technologies

Paul Panks
Joined: 5 Jun 06
Posts: 2
Credit: 385
RAC: 0
Message 17647 - Posted: 5 Jun 2006, 4:47:23 UTC
Last modified: 5 Jun 2006, 4:49:00 UTC

I recently joined Rosetta@home (today, in fact), and I am curious how the project will evolve as emerging computer technologies are incorporated into its combined global computing resources.

The Wiki article said that the goal for the project (in terms of speed) is 150 TFLOPS. Am I correct in guessing that ~55,000 computers would deliver <= 150 TFLOPS? Or is it >= 150 TFLOPS?

Also, how does Rosetta determine the potential effectiveness of each individual computer? Let's say, to use a plain example, that I've got a TRS-80, my neighbor has a Commodore 64 and Joe across the street has an Atari ST.

Do the relative performance "ceilings" of each individual computer "bog down" the overall computation speed of the project (in terms of TFLOPS)?

I'm assuming that computing time (overall) is maximized whenever possible in each individual case, but what constitutes "optimum" computing time for each unit, when virtually every computer has differing RAM, hard drive and operating system capabilities?

Will emerging computer technologies make better use of each individual unit?

Thanks,

Paul

Dimitris Hatzopoulos
Joined: 5 Jan 06
Posts: 336
Credit: 80,939
RAC: 0
Message 17649 - Posted: 5 Jun 2006, 5:03:47 UTC - in response to Message 17647.  
Last modified: 5 Jun 2006, 5:05:50 UTC

Paul Panks asked:
The Wiki article said that the goal for the project (in terms of speed) is 150 TFLOPS. Am I correct in guessing that ~55,000 computers would deliver <= 150 TFLOPS? Or is it >= 150 TFLOPS?


There is a TeraFLOPS estimate on the project's homepage, currently at 32 TFLOPS. In addition to that, I like to watch the # of active hosts at http://www.boincstats.com/stats/project_graph.php?pr=rosetta

Paul Panks asked:
Do the relative performance "ceilings" of each individual computer "bog down" the overall computation speed of the project (in terms of TFLOPS)?

I'm assuming that computing time (overall) is maximized whenever possible in each individual case, but what constitutes "optimum" computing time for each unit, when virtually every computer has differing RAM, hard drive and operating system capabilities?


Basically, the speed of individual contributing CPUs is not so important, as long as they can finish a workunit (default 4 CPU hours) within project deadlines (usually 2 weeks, recently less due to the CASP experiment). The slower ones are not dragging down the faster ones.
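
To make that concrete, here is a small Python sketch. The 4-CPU-hour workunit and 2-week deadline are the figures from this thread; the host speeds and hours-online numbers are invented for illustration:

# Sketch: a host contributes as long as it can finish a workunit before
# the deadline; its absolute speed only changes HOW MANY workunits it does.
# The 4-hour / 2-week figures are from this thread; the hosts are made up.

WU_CPU_HOURS = 4          # default workunit size (per this thread)
DEADLINE_DAYS = 14        # typical deadline (per this thread)

hosts = {
    # name: (speed relative to a "reference" PC, hours of crunching per day)
    "fast PC, always on":   (2.0, 24),
    "average PC, evenings": (1.0, 4),
    "very slow machine":    (0.01, 24),
}

for name, (speed, hours_per_day) in hosts.items():
    wu_hours = WU_CPU_HOURS / speed            # CPU hours this host needs
    days_needed = wu_hours / hours_per_day     # calendar days per workunit
    ok = days_needed <= DEADLINE_DAYS
    print(f"{name}: {days_needed:.1f} days/WU -> {'contributes' if ok else 'misses deadline'}")

The fast and average hosts both finish well inside the deadline; the very slow machine needs ~17 days per workunit and so never contributes, but it never delays anyone else either.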

It may help a lot to think in terms of Rosetta's favorite paradigm (https://boinc.bakerlab.org/rosetta/rah_welcome.php): explorers on the surface of a planet, looking for the lowest elevation point. Some explorers (CPUs) will cover much more distance (get more work done, i.e. compute more predicted protein structures) than others per unit of time.
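
A toy version of that paradigm in Python; the 1-D "landscape" function and sample counts below are invented for illustration (Rosetta's real energy function is far more complex):

import random

# Toy "explorers on a planet": many independent searchers sample a landscape;
# the project keeps the lowest point found. A slow explorer simply returns
# fewer samples; it never blocks the others.

def elevation(x):
    return (x - 3.1) ** 2 + 2.5 * abs(x % 1.7 - 0.8)   # an invented bumpy surface

def explore(n_samples, rng):
    """One host: try n_samples random spots, report the best one."""
    return min((elevation(x), x) for x in (rng.uniform(-10, 10) for _ in range(n_samples)))

rng = random.Random(42)
# A fast host does 10,000 samples; a slow one only 100 -- both still help.
results = [explore(10_000, rng), explore(100, rng)]
print("lowest elevation found:", min(results))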

BOINC runs a benchmark on your PC to determine the FLOPS speed of each host.
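
Something in this spirit, as a rough Python sketch (a toy timing loop, not BOINC's actual Whetstone/Dhrystone benchmark):

import time

# Crude illustration of a floating-point benchmark: time a fixed number of
# FLOPs and report MFLOPS. BOINC's real benchmark is far more involved.

def measure_mflops(n=2_000_000):
    x = 1.0001
    t0 = time.perf_counter()
    for _ in range(n):
        x = x * 1.0000001 + 0.0000001   # one multiply + one add per pass
    elapsed = time.perf_counter() - t0
    return (2 * n) / elapsed / 1e6      # 2 floating-point ops per iteration

print(f"~{measure_mflops():.0f} MFLOPS (pure Python, so far below the CPU's peak)")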

Currently Rosetta's server doesn't differentiate between CPUs (i.e. it doesn't send the most demanding workunits only to PCs with, e.g., >1 GB RAM), but it will do so in the future.
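
Conceptually, such matchmaking could look like the Python sketch below; the field names are hypothetical and this is not BOINC's actual scheduler (which is server-side code):

# Hypothetical sketch of per-host matchmaking; the field names are invented
# here, not taken from BOINC's real scheduler.
hosts = [
    {"name": "A", "ram_gb": 0.5},
    {"name": "B", "ram_gb": 2.0},
]

def eligible_hosts(all_hosts, wu_min_ram_gb):
    """Send a RAM-hungry workunit only to hosts that can hold it."""
    return [h["name"] for h in all_hosts if h["ram_gb"] >= wu_min_ram_gb]

print(eligible_hosts(hosts, 1.0))   # -> ['B']: only the >1 GB host qualifies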
Best UFO Resources
Wikipedia R@h
How-To: Join Distributed Computing projects that benefit humanity

Maxxou59
Joined: 5 May 06
Posts: 10
Credit: 84,743
RAC: 0
Message 17651 - Posted: 5 Jun 2006, 7:19:38 UTC

When you launch BOINC, you can sometimes see the software run a "performance test" that measures the performance of your whole PC.

It's a benchmark: it measures your floating-point speed, and this number is used to calculate your credit. The credit is calculated from this result and the time your PC needs to finish the workunit.

The project total in TFLOPS is just the sum of the benchmarks of all these PCs, but it is only a potential power, because when you use your computer to play games or write something, that uses computation too.
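
In sketch form, in Python (the credit scale factor, utilization figure and host numbers below are illustrative assumptions, not BOINC's exact formula or statistics):

# Sketch of the two ideas above: credit ~ benchmark speed x CPU time, and a
# project total that is only a fraction of the summed peak. All numbers and
# the scale factor are illustrative assumptions.

def claimed_credit(benchmark_gflops, cpu_hours, credits_per_gflops_day=200):
    """Credit grows with benchmark speed and with time spent crunching."""
    return benchmark_gflops * (cpu_hours / 24) * credits_per_gflops_day

def project_tflops(host_benchmarks_gflops, average_utilization=0.25):
    """Summed benchmarks give a *potential* power; real output is a fraction."""
    return sum(host_benchmarks_gflops) * average_utilization / 1000

print(claimed_credit(1.5, 4))            # one 4-hour workunit on a 1.5 GFLOPS host -> 50
print(project_tflops([1.5] * 55_000))    # 55,000 similar hosts -> ~20 TFLOPS effective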

See the right-hand corner of the main page (server status).

It shows the number of hosts; to reach 150 TFLOPS, the number of hosts would have to be multiplied by about 5.
Maxxou59-Lille-France
Student at University of Chemistry

hugothehermit
Joined: 26 Sep 05
Posts: 238
Credit: 314,893
RAC: 0
Message 17664 - Posted: 5 Jun 2006, 9:48:05 UTC
Last modified: 5 Jun 2006, 9:54:23 UTC

G'day Paul Panks

You wrote:
Let's say, to use a plain example, that I've got a TRS-80, my neighbor has a Commodore 64 and Joe across the street has an Atari ST.

Do the relative performance "ceilings" of each individual computer "bog down" the overall computation speed of the project (in terms of TFLOPS)?


I had to answer this, as it's been a while since I've seen these computers mentioned. And yes, I played with the TRS-80 and C64, as well as a couple of others like the System 80, MicroBee, etc.

If you could somehow make these computers do a workunit within the timeframe allowed, then you would be adding to the TFLOPS. If they failed, you wouldn't drag anything down; they just wouldn't contribute (the computers would spin their wheels, so to speak, always over-running the workunit deadline).

Rosetta@Home (like all of the @Home applications) doesn't care about your computer as such; it cares about the results that your computer provides.

As Maxxou59 said, Rosetta@Home would like about 5 times the computing power it has now. This may of course change as they improve the algorithm or find that they need even more computing power.





Feet1st
Joined: 30 Dec 05
Posts: 1755
Credit: 4,690,520
RAC: 0
Message 17674 - Posted: 5 Jun 2006, 15:47:31 UTC - in response to Message 17647.  

Paul Panks asked:
...Am I correct in guessing that ~55,000 computers would deliver <= 150 TFLOPS? Or is it >= 150 TFLOPS?

You are in the right ballpark. And you might see that there are already more than 55,000 "hosts" and wonder why R@H doesn't already have 150 TFLOPS. In short, the reason is that most PCs are not on 24 hours a day, and most BOINC participants are crunching for multiple projects, so R@H only gets a fraction of their compute time.
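
Back-of-the-envelope, in Python, with invented but plausible figures (none of these are measured project statistics):

# Why 55,000+ hosts don't add up to 150 TFLOPS: availability and sharing.
# All figures here are illustrative guesses, not measured project data.

hosts = 55_000
avg_host_gflops = 2.0        # assumed average benchmark speed
fraction_on = 0.4            # assumed share of the day PCs are powered on
share_for_rah = 0.5          # assumed BOINC resource share given to R@H

peak = hosts * avg_host_gflops / 1000
effective = peak * fraction_on * share_for_rah
print(f"peak ~{peak:.0f} TFLOPS, effective ~{effective:.0f} TFLOPS")
# -> peak ~110 TFLOPS, effective ~22 TFLOPS: the same order of magnitude
#    as the estimate on the project homepage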

Add this signature to your EMail:
Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might!
https://boinc.bakerlab.org/rosetta/

BennyRop
Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 17715 - Posted: 5 Jun 2006, 23:19:00 UTC

Hugo stated:
If you could somehow make these computers do a workunit within the timeframe allowed, then you would be adding to the TFLOPS. If they failed, you wouldn't drag anything down; they just wouldn't contribute (the computers would spin their wheels, so to speak, always over-running the workunit deadline).

My Apple II+ was from the same time period as the C64 and TRS-80. The max RAM I could install was 64 KB. It came with single-sided floppy drives that held around 150 KB, and I don't remember ever seeing a hard drive larger than 10 MB offered for the Apple II+ line.

It'd be an amazing achievement if someone could get Rosetta to run on such a diminished footprint as that.

Paul Panks
Joined: 5 Jun 06
Posts: 2
Credit: 385
RAC: 0
Message 18031 - Posted: 7 Jun 2006, 23:53:47 UTC - in response to Message 17715.  
Last modified: 7 Jun 2006, 23:54:42 UTC

BennyRop wrote:
Hugo stated:
If you could somehow make these computers do a workunit within the timeframe allowed, then you would be adding to the TFLOPS. If they failed, you wouldn't drag anything down; they just wouldn't contribute (the computers would spin their wheels, so to speak, always over-running the workunit deadline).

My Apple II+ was from the same time period as the C64 and TRS-80. The max RAM I could install was 64 KB. It came with single-sided floppy drives that held around 150 KB, and I don't remember ever seeing a hard drive larger than 10 MB offered for the Apple II+ line.

It'd be an amazing achievement if someone could get Rosetta to run on such a diminished footprint as that.


To give everyone an idea of how diminished a footprint that is, consider that in 1982, 64 KB of RAM was expensive, and CP/M (an operating system dating from the mid-1970s) required 64 KB to run efficiently -- but that's still only 65,536 bytes.

A standard floppy disk (5.25") at that time held 360 KB (IBM-formatted), 170 KB (Commodore-formatted) or 90 KB (Atari-formatted).

If the computers just used floppy disks and required "x" disk swaps to load data into memory, it would take weeks just to complete a single work unit, based solely on the number of disk swaps involved, before even accounting for CPU processing time.

2.3 MB is approximately 6 IBM-formatted floppies, 13 Commodore-formatted floppies, and 25 Atari-formatted floppies.
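
That arithmetic is easy to check in Python (the 2.3 MB figure and per-disk capacities are the ones given above):

# Check of the floppy-count arithmetic above (capacities from this thread).
payload_kb = 2.3 * 1000          # ~2.3 MB of data, as in the post
for fmt, capacity_kb in [("IBM", 360), ("Commodore", 170), ("Atari", 90)]:
    disks = payload_kb / capacity_kb
    print(f"{fmt}: ~{disks:.0f} disks")
# -> IBM ~6, Commodore ~14, Atari ~26 (the post rounds slightly differently)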

Assuming each CPU is essentially an 8-bit 6502-caliber design, the work time required to complete a single work unit would be astronomically long. A simple bubble sort of 100 random numbers takes approximately 27 seconds to sort completely in Commodore BASIC and around 6 seconds in machine language (ML). A modern-day computer can accomplish the same sort in a matter of microseconds.

If that's the case, then the 2-3 hours it takes some modern computers to complete a single work unit would stretch to about 3,375 days on a Commodore 64, scaling by the implied slowdown factor of roughly 27,000. That works out to a little over nine years.
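
Both halves of that estimate can be sanity-checked in Python: timing the same bubble sort today, and scaling a 3-hour workunit by the ~27,000x slowdown the 3,375-day figure implies:

import random, time

# 1) Repeat the experiment today: bubble-sort 100 random numbers.
def bubble_sort(a):
    a = list(a)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

nums = [random.random() for _ in range(100)]
t0 = time.perf_counter()
bubble_sort(nums)
print(f"bubble sort of 100 numbers: {time.perf_counter() - t0:.6f} s")  # ~milliseconds

# 2) Scale a 3-hour workunit by the post's implied ~27,000x C64 slowdown.
slowdown = 27_000
days = 3 * slowdown / 24
print(f"3 h x {slowdown:,} = {days:,.0f} days (~{days / 365.25:.1f} years)")
# -> 3,375 days, roughly 9.2 years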

That's my best guess, anyway.







David Emigh
Joined: 13 Mar 06
Posts: 158
Credit: 417,178
RAC: 0
Message 18034 - Posted: 8 Jun 2006, 0:39:00 UTC

So I guess the notion of putting my old Amiga computer (A500) to work for Rosetta is a forlorn hope ;)

{please note, this is NOT a serious post}
Rosie, Rosie, she's our gal,
If she can't do it, no one shall!

BennyRop
Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 18037 - Posted: 8 Jun 2006, 1:20:40 UTC

People have put working modern computers (small-footprint motherboards with cool-running CPUs) inside whisky bottles, so it's possible to mod the case of your favorite first computer and run Rosetta on it. People who owned such systems will wonder why you have a non-standard monitor and an external keyboard/mouse. :)


