Joined: 26 Nov 14
Can anyone explain why my results statistics don't match my preferences for workload?
I currently run Rosetta, SETI, and Einstein.
I have Rosetta as the highest percentage, but it gets the lowest average results.
I really bumped it up a couple of weeks ago, and I'm still seeing the same thing.
For specifics, current settings are:
Rosetta = 80%
SETI = 15%
Einstein = 5%
But the work results are almost opposite.
Recent average daily work units:
Rosetta = 5,300
SETI = 10,900
Einstein = 29,100
If any additional info, screenshots, or logs would be helpful, just let me know.
Joined: 22 Aug 06
There are several issues at play. I'll touch upon each and ask that you post more questions on those of most interest to you.
The RAC for each project is an average over the past 14 days of reported results. So if you have changed your resource shares less than 14 days ago, the new work mix will not be fully reflected in your RAC.
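To illustrate that lag, here is a small Python sketch (not BOINC's actual RAC code, and with made-up daily credit numbers) of a 14-day average. A change in daily output only shows up fully once the whole window has rolled over:

```python
# Sketch: a 14-day moving average of daily credit, showing why a
# recent change in resource shares takes time to appear in RAC-style
# statistics. The numbers are hypothetical, purely for illustration.

def moving_average(history, window=14):
    """Average of the most recent `window` daily values."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Suppose the host earned 1000 credits/day, then the share change
# doubles its daily output to 2000 credits/day.
history = [1000.0] * 14
print(moving_average(history))   # before the change: 1000.0

history += [2000.0] * 7
print(moving_average(history))   # halfway through the window: 1500.0

history += [2000.0] * 7
print(moving_average(history))   # after 14 full days: 2000.0
```

So even with a clean doubling of output, the averaged statistic sits somewhere in between until two full weeks have passed.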
The credits granted by each project are on their own scale and are best compared against credits from the same project. If you are holding US dollars, they are best compared to an item priced in US dollars; euros are not the same. Likewise, SETI credits are not the same as Einstein credits. Many say that R@h credits are worth more than those of other projects, because each typically requires a bit more crunching to earn.
The resource shares are guidelines that the BOINC Manager tries to follow. But it dynamically adjusts what work it actually requests next based upon things such as whether or not you currently have any tasks running at high priority as they approach their deadlines. Or, if a project does not send new work units when they are requested, it may request more work from another project until more become available from the first. So there are variations over time in the exact compute resource used by each project. The BOINC Manager also has to round up to the nearest whole work unit when requesting work, does not always report completed results back immediately, and, depending upon where you look at your RAC, it may be based upon project stats data that is often refreshed only daily. I often describe it to people by saying you should look for your resource shares to be enforced over the course of 100 hours, not 100 minutes.
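A toy scheduler makes the "100 hours, not 100 minutes" point concrete. The sketch below is not BOINC's actual work-fetch algorithm; it is a simple deficit scheduler with assumed task sizes, showing that when work is fetched one whole task at a time, short windows can look nothing like the configured shares, yet the long-run totals land on them:

```python
# Toy deficit scheduler (a sketch, not BOINC's real algorithm).
# Each project accrues "owed" CPU-hours in proportion to its share;
# the most-owed project runs next, but must take one whole task.
# Task sizes below are assumptions for illustration only.

shares = {"Rosetta": 0.80, "SETI": 0.15, "Einstein": 0.05}
task_hours = {"Rosetta": 6.0, "SETI": 2.0, "Einstein": 1.0}

debt = {p: 0.0 for p in shares}   # CPU-hours each project is owed
done = {p: 0.0 for p in shares}   # CPU-hours actually crunched
hours = 0.0

while hours < 1000.0:
    project = max(debt, key=debt.get)   # most-owed project runs next
    t = task_hours[project]             # must run a whole task
    for p in shares:
        debt[p] += shares[p] * t        # everyone accrues entitlement
    debt[project] -= t                  # the runner pays down its debt
    done[project] += t
    hours += t

for p in shares:
    # long-run fractions settle close to the configured shares
    print(p, round(done[p] / hours, 3))
```

Over the first few tasks the fractions swing wildly (a single 6-hour Rosetta task is 100% of the work for its duration), but by the end of the run each project's share of total CPU time is within a task's worth of its configured share.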
There are also different resource needs from different projects and different work units. Some need more memory to run than others. Some run best on CPUs with large L2 caches. Some checkpoint more frequently than others, and so perhaps are losing work when machines are rebooted or turned off.
Rosetta Moderator: Mod.Sense
©2019 University of Washington