Unexpected difference in reward points

Questions and Answers : Windows : Unexpected difference in reward points

GennadyK (deprecated)

Joined: 9 Oct 18
Posts: 3
Credit: 91,792
RAC: 0
Message 89761 - Posted: 24 Oct 2018, 13:11:00 UTC

Hello all,

I am using two computers for Rosetta@Home computations:
1. Intel Core i5-6500 (Skylake) with 4 threads; each processes 1 WU in 8 hours, giving around 500 points per WU
2. Intel Core i7-2720QM with 8 threads; each processes 1 WU in 8 hours, giving around 150 points per WU

The second computer supports Hyper-Threading, so it exposes 8 threads while still having only 4 physical cores. I could understand it if the processing time were 16 hours; however, it remains 8 hours, while the reward points are more than 2 times lower. This does not happen when computing for World Community Grid: computer #1 needs 3 hours per WU and gets 150 points, while computer #2 needs 8 hours per WU and also gets 150 points.
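To make the gap concrete, here is a quick sketch comparing credit per CPU-hour rather than per work unit, using only the approximate figures quoted above (it assumes all threads stay busy the whole time):

```python
# Rough throughput comparison using the numbers quoted in this post.

def credit_per_cpu_hour(credits_per_wu: float, hours_per_wu: float) -> float:
    """Credit earned per hour by a single thread."""
    return credits_per_wu / hours_per_wu

# i5-6500: 4 threads, ~500 credits per 8-hour WU
i5_thread_rate = credit_per_cpu_hour(500, 8)   # 62.5 credits/hour per thread
# i7-2720QM: 8 threads (Hyper-Threading), ~150 credits per 8-hour WU
i7_thread_rate = credit_per_cpu_hour(150, 8)   # 18.75 credits/hour per thread

# Whole-machine credit rate = per-thread rate * thread count
print(f"i5-6500:    {4 * i5_thread_rate} credits/hour")  # 250.0
print(f"i7-2720QM:  {8 * i7_thread_rate} credits/hour")  # 150.0
```

So even counting all 8 threads, the i7 earns well under the i5's total rate, which is what prompts the question below.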

Could anyone explain why computer #2 is so much less efficient for Rosetta@Home? Does Rosetta just give any CPU 8 hours, then cut the task off, and whatever is done is done?

Cheers
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 89809 - Posted: 31 Oct 2018, 4:20:55 UTC

Your 2720 seems to have 8 logical processors and less than 4 GB of memory. With Hyper-Threading enabled, you are essentially doubling the demand on the already constrained memory on that machine.

The way R@h handles runtime is to align with your R@h target-runtime setting. Credit is awarded per model completed, so the number of models a given task attempts is reduced (or increased) to match your target runtime. You should therefore assess credit per CPU-second (not "wall-clock" time), rather than per work unit.
Rosetta Moderator: Mod.Sense
Profile GennadyK

Joined: 14 Oct 06
Posts: 4
Credit: 3,059,534
RAC: 0
Message 89813 - Posted: 1 Nov 2018, 10:17:51 UTC - in response to Message 89809.  

Thanks for the explanations.

For instance, a completed task with the typical reward points (150) for i7 2720:
  https://boinc.bakerlab.org/result.php?resultid=1037983853
This process generated 45 decoys from 45 attempts

Here is a typical for i5 6500 (450 points):
  https://boinc.bakerlab.org/result.php?resultid=1038032459
This process generated 128 decoys from 128 attempts

The last one is for the weakest Android device, typically 25 points:
  https://boinc.bakerlab.org/result.php?resultid=1037488690
This process generated 9 decoys from 9 attempts

Is this the information about attempted models you were talking about?
Can the i7-2720QM run more efficiently (with more points per task) using only 4 threads, or does it not matter because Rosetta will adjust the WU workload anyway, so the efficiency always remains similar?
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 89884 - Posted: 12 Nov 2018, 20:48:58 UTC

Yes, those "decoys" (or models) are what I was talking about.

R@h complexity does not change, just the length of time the task runs (trying to align with your runtime preference). The longer it runs, the more decoys it will complete, and the more credit it will be granted.

It is difficult to predict how a memory-constrained machine will perform with more memory or fewer active tasks. I would suggest you look at the memory page-fault rate. If the fault rate is presently very high, then you might actually get more net work completed by running fewer active tasks. In general, hyperthreading does not yield much of a performance gain for an entirely CPU-bound workload (such as R@h).
Rosetta Moderator: Mod.Sense




©2024 University of Washington
https://www.bakerlab.org