Big WUs, tiny credit

Message boards : Number crunching : Big WUs, tiny credit


Hank Barta

Send message
Joined: 6 Feb 11
Posts: 14
Credit: 3,943,460
RAC: 0
Message 70391 - Posted: 25 May 2011, 13:23:14 UTC

I have recently received a number of WUs that take about 7 hours, and the credit granted is about 1/10 of the claimed credit. What is going on here? Is there a problem with my system? These seem to comprise about 10% of the workload on this host and do not appear on any of my other hosts (which all have less CPU horsepower).

Here are a couple of recent samples:
https://boinc.bakerlab.org/rosetta/workunit.php?wuid=387381739
https://boinc.bakerlab.org/rosetta/workunit.php?wuid=387381738


Ordinarily this host is granted about 2/3 of claimed credit but that also seems to be down to about half. :(
ID: 70391
Mod.Sense
Volunteer moderator

Send message
Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 70393 - Posted: 25 May 2011, 16:46:24 UTC
Last modified: 25 May 2011, 16:49:35 UTC

Right, credit claims are based strictly on time taken and your machine's benchmarks. The granted credit reflects the effort required by the average machine to complete each model. As new protocols are developed, it is not uncommon to see large variation in per-model runtimes. This is unfortunate as far as the credit system goes, but critical in terms of developing the new protocols. Over time they always seem to find ways to make enhancements that make the runtimes more consistent.

And so yes, your machine appears to have run several models and then encountered one that took exceptionally long to complete. Your average time per model becomes very high, and task credit becomes low (but your per-model credit granted is the same average everyone else is getting). The good news, however, is that as you complete other similar tasks without encountering a long-running model, your average per model looks better than on tasks where a long-runner was encountered, and so you are actually granted credit on the high side. This is why we always explain that it "does average out" over time; it's just that the plus side is often, say, 5% (often hard to even notice) spread across 20 tasks, while the down side is often 90% on a single task (hard to miss).
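A rough sketch of that effect, using made-up numbers (the real per-model credit, benchmark rates, and model runtimes all differ): claimed credit scales with CPU time, while granted credit scales with the number of models completed, so a single long-running model depresses the granted/claimed ratio for that task.

```python
# Hypothetical illustration of the credit math described above.
# All constants are invented for the example, not real Rosetta values.
PER_MODEL_CREDIT = 10.0   # average credit granted per model (same for all hosts)
CLAIM_PER_HOUR = 15.0     # what this host's benchmarks claim per CPU-hour

def task_credit(model_hours):
    """Return (claimed, granted) credit for one task.

    claimed  ~ total CPU time * benchmark rate
    granted  ~ number of models completed * average per-model credit
    """
    claimed = sum(model_hours) * CLAIM_PER_HOUR
    granted = len(model_hours) * PER_MODEL_CREDIT
    return claimed, granted

# A typical 7-hour task: seven models of about an hour each.
normal = task_credit([1.0] * 7)          # (105.0, 70.0) -> granted is ~2/3 of claimed

# A 7-hour task that hit one long-runner: two quick models, then a 5-hour one.
long_runner = task_credit([1.0, 1.0, 5.0])  # (105.0, 30.0) -> granted drops sharply
```

Same wall-clock time, same claim, but far fewer models completed, so the granted credit for the long-runner task is much lower; tasks without a long-runner pull the average back up.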

Such discussions always seem to morph into a discussion of the potential merits of altering your target runtime. All of the same rules and probabilities apply: a longer runtime per task increases your odds of encountering a long-runner on any given task, but you will complete fewer tasks per day, and your per-model odds are identical. So as long as the resulting difficulty BOINC has in predicting how long tasks will take to complete is not a major issue (i.e. so long as you have frequent internet access), it really doesn't change things. If the scheduler is causing you problems, then selecting a longer target runtime will tend to yield more consistent completion times, because a 3 or 4 hour overage on a task is a smaller percentage of a 12 or 24 hour runtime target than it is of a 3 hour target. The "watchdog" ensures that tasks running any longer than that are cleaned up automatically.
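The overage arithmetic above can be made concrete. Assuming a hypothetical 3-hour overage from a long-running model, its relative impact shrinks as the target runtime grows:

```python
# Relative impact of a fixed 3-hour overage on different target runtimes.
# The 3-hour figure is taken from the example in the text; targets are the
# preference values mentioned there.
OVERAGE_HOURS = 3.0

impact = {
    target: OVERAGE_HOURS / target * 100.0   # overage as % of target runtime
    for target in (3.0, 12.0, 24.0)
}

for target, pct in impact.items():
    print(f"{target:>4.0f}h target: a 3h overage is {pct:.1f}% of the target")
```

So the same long-runner doubles a 3-hour target's completion time but adds only 12.5% to a 24-hour target, which is why longer targets give the scheduler more consistent completion times.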
Rosetta Moderator: Mod.Sense
ID: 70393




©2024 University of Washington
https://www.bakerlab.org