Subpar credit per CPU second for R@H. Why?

Message boards : Number crunching : Subpar credit per CPU second for R@H. Why?

student_

Send message
Joined: 24 Sep 05
Posts: 34
Credit: 4,350,469
RAC: 4,920
Message 54854 - Posted: 2 Aug 2008, 22:30:14 UTC

According to the BOINCstats project credit comparison, Rosetta@home grants significantly less credit than most other projects for the same amount of CPU time. Comparing only the ratios on hosts that work on both projects (e.g. a host that works on both Rosetta@home and Einstein@home), the figure is about 70% compared to Einstein@home, 72% compared to Docking@home, 80% compared to SETI@home, and in general below average.

Optimizing the efficiency of Rosetta@home may not be strictly necessary to increase the network's performance, considering it's almost twice as powerful as it was at the beginning of CASP 7 (37 teraFLOPS then, 70 teraFLOPS now). Since the number of hosts grew by less than a third over those two years (65,000 then, 85,000 now), Rosetta@home will probably depend on Moore's law more than mass appeal for its growth in CPU power.
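(A quick check of the growth figures quoted above, using only the numbers from this post:)

```python
# Rosetta@home network growth since the start of CASP 7, per the figures above.
tflops_then, tflops_now = 37, 70
hosts_then, hosts_now = 65_000, 85_000

tflops_ratio = tflops_now / tflops_then              # ~1.89x: "almost twice as powerful"
host_growth = (hosts_now - hosts_then) / hosts_then  # ~0.31: "less than a third" more hosts

print(f"{tflops_ratio:.2f}x throughput, {host_growth:.0%} more hosts")
```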

Is the below-average performance basically due to the lack of an optimized application, of 64-bit clients, etc.? In particular, with Docking@home, could that project's higher FLOPS per unit of CPU time be due to its use of the CHARMM molecular dynamics package?
ID: 54854
Profile dcdc

Send message
Joined: 3 Nov 05
Posts: 1829
Credit: 117,028,544
RAC: 81,741
Message 54867 - Posted: 3 Aug 2008, 12:43:39 UTC - in response to Message 54854.  
Last modified: 3 Aug 2008, 12:44:11 UTC

According to the BOINCstats project credit comparison, Rosetta@home grants significantly less credit than most other projects for the same amount of CPU time. Comparing only the ratios on hosts that work on both projects (e.g. a host that works on both Rosetta@home and Einstein@home), the figure is about 70% compared to Einstein@home, 72% compared to Docking@home, 80% compared to SETI@home, and in general below average.

Optimizing the efficiency of Rosetta@home may not be strictly necessary to increase the network's performance, considering it's almost twice as powerful as it was at the beginning of CASP 7 (37 teraFLOPS then, 70 teraFLOPS now). Since the number of hosts grew by less than a third over those two years (65,000 then, 85,000 now), Rosetta@home will probably depend on Moore's law more than mass appeal for its growth in CPU power.

Is the below-average performance basically due to the lack of an optimized application, of 64-bit clients, etc.? In particular, with Docking@home, could that project's higher FLOPS per unit of CPU time be due to its use of the CHARMM molecular dynamics package?

Rosetta's efficiency has no effect: if you doubled the speed at which Rosetta crunches, it wouldn't change the credit assigned, because the basis for the credit assignment would simply expect twice as much work for each credit.

To change the granted credit, a multiplier would need to be added to the initial calculation that assigns the per-decoy value for each work unit. I believe that is done on the in-house machines before the jobs are released. The same multiplier would have to be applied to the BOINC claimed credit too, otherwise the average claims would drag the decoy's value down. Alternatively, it could be added to the bakerlab's credit-granting calculation, in which case all of the claimed credit values would have to be multiplied by it...
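(A rough sketch of how such a multiplier would have to propagate through both calculations. The names and the multiplier value are hypothetical, not the actual bakerlab code:)

```python
MULTIPLIER = 1.25  # hypothetical scaling factor, chosen only for illustration

def scaled_decoy_value(base_value_per_decoy: float) -> float:
    # Applied where the work unit's per-decoy value is first assigned in-house.
    return base_value_per_decoy * MULTIPLIER

def scaled_claim(boinc_claimed_credit: float) -> float:
    # The same multiplier must also scale each host's BOINC claimed credit;
    # otherwise unscaled claims get averaged in and drag the decoy value back down.
    return boinc_claimed_credit * MULTIPLIER
```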
ID: 54867
student_

Send message
Joined: 24 Sep 05
Posts: 34
Credit: 4,350,469
RAC: 4,920
Message 54880 - Posted: 3 Aug 2008, 19:41:57 UTC - in response to Message 54867.  

Rosetta's efficiency has no effect: if you doubled the speed at which Rosetta crunches, it wouldn't change the credit assigned, because the basis for the credit assignment would simply expect twice as much work for each credit.

To change the granted credit, a multiplier would need to be added to the initial calculation that assigns the per-decoy value for each work unit. I believe that is done on the in-house machines before the jobs are released. The same multiplier would have to be applied to the BOINC claimed credit too, otherwise the average claims would drag the decoy's value down. Alternatively, it could be added to the bakerlab's credit-granting calculation, in which case all of the claimed credit values would have to be multiplied by it...


Thanks for the clarification. I was going on the assumption that credits approximated FLOPS per the relationship used on the main page (daily credit/100,000 = estimated teraFLOPS), which doesn't seem to reflect the actual situation.
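(The front-page relationship mentioned above is just a fixed scaling, which one can write out directly:)

```python
def estimated_teraflops(daily_credit: float) -> float:
    # Main-page estimate: daily credit / 100,000 = estimated teraFLOPS.
    return daily_credit / 100_000

# e.g. a network granting 7,000,000 credits per day would be estimated at 70 TFLOPS
```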

How does the multiplier work? Does it actually try to estimate the floating-point operations done to produce one decoy for each workunit, or something else?
ID: 54880
Profile dcdc

Send message
Joined: 3 Nov 05
Posts: 1829
Credit: 117,028,544
RAC: 81,741
Message 54888 - Posted: 3 Aug 2008, 21:12:16 UTC - in response to Message 54880.  

Thanks for the clarification. I was going on the assumption that credits approximated FLOPS per the relationship used on the main page (daily credit/100,000 = estimated teraFLOPS), which doesn't seem to reflect the actual situation.

How does the multiplier work? Does it actually try to estimate the floating-point operations done to produce one decoy for each workunit, or something else?

I'm assuming the first reported tasks will be from the bakerlab's in-house clusters, but that might not be true; maybe the test runs aren't included in the results pool. Either way, the first task reported is the one that sets the initial credit granted, i.e. it gets what it claims. Later submissions get the average of all previous claimed credits (per decoy). I believe the idea was to make one BOINC cobblestone (not sure how that's determined...) equal one credit on R@H, so the multiplier must just be applied to the BOINC benchmark score. In that case it would just be a matter of increasing that value.
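(A minimal sketch of the averaging scheme described above, assuming the first claim seeds the per-decoy value and each later claim is averaged in. The class name is made up for illustration:)

```python
class DecoyCredit:
    """Per-work-unit credit: the first reporter gets what it claims,
    and each later reporter is granted the mean of all claims so far."""

    def __init__(self):
        self.claims = []

    def report(self, claimed_credit: float) -> float:
        # Record this host's claim, then grant the running average.
        self.claims.append(claimed_credit)
        return sum(self.claims) / len(self.claims)
```

So a first claim of 10 is granted in full, while a second claim of 20 would be granted 15, pulling the per-decoy value toward the average of all hosts' claims.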

Danny
ID: 54888




©2024 University of Washington
https://www.bakerlab.org