Message boards : Number crunching : Claimed credit vs grant credit
[CAMP] balint Joined: 27 Oct 05 Posts: 2 Credit: 102,175 RAC: 0
I'm sorry for bumping this thread, but can't anybody help me?
David Emigh Joined: 13 Mar 06 Posts: 158 Credit: 417,178 RAC: 0
Claimed credit is based on the benchmarks your computer runs when you set up the BOINC client. The factors that affect it include integer operations per second, floating-point operations per second, and so forth.

For Rosetta, granted credit is based on the number of models (decoys) that your computer actually builds while running the task, irrespective of how long each model takes to build. The very first person to report a particular type of task is granted exactly what they claimed. For each person who reports after the first, the granted credit per model is the running average of the credit per model claimed by everyone who reported before them.

The following is my understanding of how the system works. I could be wrong, and I humbly submit to correction by anyone better informed than myself.

Example: The first person to report completes 10 models and claims 100 credits. They get exactly what they claimed. Those models are worth 10 credits each, regardless of how long it took to make them.

The second person to report completes 8 models and claims 170 credits. They get 80 credits, because the models were worth 10 credits each when they reported. However, the models will be worth a little more to the next person: at this point, 18 models have been completed and 270 credits have been claimed, so the models are now worth 15 credits each.

The third person to report completes 12 models and claims 90 credits. They get 180 credits, because models were worth 15 credits each when they reported. However, the models will be worth a little less to the next person: at this point, 30 models have been completed and 360 credits have been claimed, so the models are now worth 12 credits each.

And so on... Again, I remind you that the above is my understanding of the process, and my understanding may be flawed.

Rosie, Rosie, she's our gal, If she can't do it, no one shall!
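For anyone who prefers code to prose, here is a minimal Python sketch of the averaging scheme as described in the example above. It is my reading of that example, not the project's actual implementation, and the variable names are made up for illustration:

```python
# Toy simulation of the per-model running-average credit scheme described above.
# Each report is (models completed, credit claimed), in the order results are returned.
reports = [(10, 100), (8, 170), (12, 90)]

total_models = 0
total_claimed = 0.0

for i, (models, claimed) in enumerate(reports, start=1):
    if total_models == 0:
        granted = claimed                      # first reporter gets exactly what they claimed
    else:
        rate = total_claimed / total_models    # current "worth" of one model
        granted = rate * models
    total_models += models
    total_claimed += claimed
    print(f"report {i}: {models} models, claimed {claimed}, granted {granted:.0f}")

# Expected output, matching the worked example:
# report 1: 10 models, claimed 100, granted 100
# report 2: 8 models, claimed 170, granted 80
# report 3: 12 models, claimed 90, granted 180
```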
Mikey Joined: 9 May 07 Posts: 5 Credit: 135,037 RAC: 0
Not sure if it will make you feel better, but my credit per unit is dropping like a stone in a similar manner to yours. This started May 1st. I came to this board to see if others are having the same problem.
AMD_is_logical Joined: 20 Dec 05 Posts: 299 Credit: 31,460,681 RAC: 0
Some computers will go into a low-power low-frequency state when not in use. Depending on how a computer is set up, it may think it's not in use when only a low priority task (such as Rosetta) is running. Laptops are often set up this way. I've also heard that Ubuntu Linux is like this by default. If a computer is running in low-power mode, it will crunch models very slowly and thus will get much less credit than if it were running full speed.
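If you suspect this on a Linux machine, one quick check is to read the CPU frequency scaling state from sysfs. This is a rough sketch assuming the usual cpufreq interface is present; the paths may be missing or different on other kernels or operating systems:

```python
# Check the CPU frequency scaling state on Linux via the cpufreq sysfs interface.
from pathlib import Path

cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

for name in ("scaling_governor", "scaling_cur_freq", "scaling_max_freq"):
    f = cpu0 / name
    if f.exists():
        print(f"{name}: {f.read_text().strip()}")
    else:
        print(f"{name}: not available on this system")

# A "powersave" governor, or a current frequency far below the maximum while
# Rosetta is running, suggests the machine is crunching in a low-power state.
```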
fjpod Joined: 9 Nov 07 Posts: 17 Credit: 2,201,029 RAC: 0
> Claimed credit is based on the benchmarks your computer runs when you set up the BOINC client. The factors that affect it include integer operations per second, floating-point operations per second, and so forth.

So if I understand this, having a really fast computer is not necessarily a help in acquiring credits. However, being the first one to complete a unit might be helpful. But if a bunch of slow computers have finished a workunit before you get to it, do you then have no advantage if you have a fast one?
dcdc Joined: 3 Nov 05 Posts: 1831 Credit: 119,449,594 RAC: 10,776
> So if I understand this, having a really fast computer is not necessarily a help in acquiring credits. However, being the first one to complete a unit might be helpful. But if a bunch of slow computers have finished a workunit before you get to it, do you then have no advantage if you have a fast one?

While David Emigh's description is correct, it is unlikely that the claimed credits will continually fall as in the example, so the granted credit doesn't tend to fall in turn. The credit is just as likely to rise at first, but either way it evens out to the average of the claimed credit to date. Therefore faster computers, which produce more decoys (and claim more credit), get more credit.
Mod.Sense Volunteer moderator Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0
A "really fast" computer will post higher benchmarks. It will run for the number of hours in your Rosetta preferences for the runtime preference (rounding the runtime to the nearest model). The only time "fast" vs "slow" would be a credit issue is when their benchmarks do not reflect the same degree of fastness as the actual Rosetta work being produced. If one machine posts 1,000 floating point operations per second benchmarks and completes 10 models with 3 hours of runtime, a faster machine that posts 2,000 floating point operations per second would be "twice as fast". But if that second machine only completes 18 models of the identical type of work in the same 3 hours, it's not going to get quite twice the credit per hour average of the first machine. It will "claim" twice the credit, but not be granted that much. Rosetta Moderator: Mod.Sense |
fjpod Joined: 9 Nov 07 Posts: 17 Credit: 2,201,029 RAC: 0
I appreciate all the attempts to explain this, but I still don't get it. So, again, what you are saying is: if I have a fast machine and someone else with a slow machine finishes a work unit first and gets, say, 80 credits, and I go and finish that same workunit later in half the time, I will probably not get the 80 credits, but maybe 40 or 50. If I am correct, this doesn't seem fair. I'm not mad; I'm just trying to understand the beast. Maybe my problem is I don't really understand what a decoy is.
Ingleside Joined: 25 Sep 05 Posts: 107 Credit: 1,514,472 RAC: 0
> I appreciate all the attempts to explain this, but I still don't get it. So, again, what you are saying is: if I have a fast machine and someone else with a slow machine finishes a work unit first and gets, say, 80 credits, and I go and finish that same workunit later in half the time, I will probably not get the 80 credits, but maybe 40 or 50. If I am correct, this doesn't seem fair. I'm not mad; I'm just trying to understand the beast.

To make the calculation "simple":

1. A "slow" computer uses 4 hours and manages to crunch 10 decoys in this time, and gets 80 credits. This is 8 credits/decoy, and 20 credits/hour.

2. A "fast" computer uses 4 hours and manages to crunch 18 decoys in this time. For this, the "fast" computer gets: 8 credits/decoy * 18 decoys = 144 credits. This also means 36 credits/hour.

So, a little simplified, everyone gets 8 credits/decoy for this WU type, but a fast computer can generate more decoys in the same amount of time than a slow computer can, and therefore gets more credit/hour. In practice, credit/decoy will vary somewhat as more and more results are returned, but this variation should be small.

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
Mod.Sense Volunteer moderator Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0
Decoys, also called models, are basically a new (and different) run on the protein that the task pertains to. The way Rosetta honors the user's runtime preference is that it just keeps running more decoys until the application can see, from the amount of time each is taking, that doing another would exceed the preference. It then marks the task as completed. You can see the number of decoys you completed when you view the details of the task on the website. You can also see it in the graphic for the Rosetta application (though it is not shown yet in the "mini" Rosetta application graphic).

Where people always get thrown off is that they try to count tasks, or time, rather than decoys. The actual science work done for the project is measured in decoys, and credit is granted per decoy.

The other area where people get confused is that they look at credit per model on one task and compare it to credit per model on another, forgetting the part about it being the same type of protein and the same analysis method being used upon it. To be comparable, the WU names basically have to be identical except for the second-to-last block of digits, which is where tasks are sequentially numbered.

Credit claims are just a reflection of the machine's benchmarks and the time spent on the task. Credit granted reflects the amount of work achieved, regardless of time spent or your benchmarks. So, if ANY machine reports the first completed task in the line and is granted 80 credits for completing X decoys, then the next one to report, regardless of relative speed, will get 80/X credits granted per decoy they report back. If the credit claim of the second report was 100 for the same X decoys produced, then the third machine to report would get (80+100)/2X credits per decoy. So your claim affects the average that is carried forward, but does not directly affect your own granted credit.

Rosetta Moderator: Mod.Sense
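Written out as a formula, this is my reading of the description above, with $c_i$ and $d_i$ denoting the credit claimed and the decoys reported by the $i$-th task returned, and $g_n$ the credit granted to the $n$-th report:

$$g_1 = c_1, \qquad g_n = d_n \cdot \frac{\sum_{i=1}^{n-1} c_i}{\sum_{i=1}^{n-1} d_i} \quad (n > 1)$$

With the numbers above: the second reporter gets $d_2 \cdot 80/X$, and the third gets $d_3 \cdot (80+100)/2X$.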
Chilcotin Joined: 5 Nov 05 Posts: 15 Credit: 16,969,500 RAC: 0
I have been puzzling about this for a while as well. I tend to let my models run for a while; usually about 12 hours. I reduced that a bit recently, as some models were crashing if run too long. On the surface it would appear that, all other things being equal, a machine with shorter run times would have an advantage over a machine with longer run times, as its models, being returned first, would not be discounted. Recognizing that "credit discussions" are always fraught with peril, I emphasize that I am not complaining ... just trying to understand the process.
dcdc Joined: 3 Nov 05 Posts: 1831 Credit: 119,449,594 RAC: 10,776
> On the surface it would appear that, all other things being equal, a machine with shorter run times would have an advantage over a machine with longer run times, as its models, being returned first, would not be discounted.

There's no reason why a task returned early would get more credit than a task (containing the same number of decoys) returned later.
Mod.Sense Volunteer moderator Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0
> On the surface it would appear that, all other things being equal, a machine with shorter run times would have an advantage over a machine with longer run times, as its models, being returned first, would not be discounted.

Not certain what "discounted" credit you refer to. In order to give an informative illustration, one machine has to do something different than the other, and sometimes I show that as being less... but it being more is just as likely. After the first handful of reports, an average is very firmly established and is not influenced greatly in one direction or the other by the rest of the reports. Indeed, in my most recent example earlier in this thread, the third one to report receives slightly more credit per model than the first one to report received. This was due to how the benchmarks of the second one to report compared to their ability to complete models.

The only advantage I can see for the very first to report is that, I believe, their claimed credit is also granted to them. So if they claimed an outrageous amount of credit, they would get it. But the people following that initial report would ALSO have that outrageous figure factored into the average they received, thus removing any incentive to attempt to falsify a credit claim. Also, there's no good way to bias your odds of being the very first to report other than a short runtime. It's statistically very unlikely you will be the first, because you probably weren't the first to be issued a task from that batch either.

The only advantage I can think of (assuming long tasks complete normally) to setting a shorter runtime preference and reporting results right away is that you get the credit issued sooner. In other words, if I start at zero, do two days of work and get 250 credits... on day 1, if I've not reported back anything yet, I'm showing zero credits so far. You have the work completed, but it has not been reported back yet. So the overall credit issued is the same (after 2 days), and the resulting RAC will be identical (at the end of the 2 days), but for a very short window of time your total credit and any resulting RAC change would reflect slightly more work having been completed. For people who fixate on the fractions of points in their RAC, this is the sort of thing they do to bump it. But it is sort of self-defeating, because you have to KEEP doing it in order to maintain the number, and if you had just waited and let it run normally, it would quickly maintain the same number on its own.

It's a lot like buying a bunch of stuff you need anyway for your business just before the end of your fiscal year so you can deduct it from your taxes. You get to deduct it regardless of when it was purchased, but if you buy it now, you get to deduct it sooner and reflect it in an earlier fiscal quarter than if you had just waited and bought it when it was actually required later on. With taxes, there is the alternative of reporting more income in the current period, and the time value of money; with credits, these other influences are not present.

Rosetta Moderator: Mod.Sense
Chilcotin Joined: 5 Nov 05 Posts: 15 Credit: 16,969,500 RAC: 0
> I think he's referring to the fact that tasks which run longer have a greater chance to crash, thus not returning any credit at all.

No, but thanks for the credit (no pun intended). I had understood from the discussion that later submissions received less credit than earlier submissions for a particular model. Let it ride... it isn't something that I am worrying about. I was just puzzled by the credit strategy. I will be participating regardless.
dcdc Joined: 3 Nov 05 Posts: 1831 Credit: 119,449,594 RAC: 10,776
Here's the thread I was thinking of - in particular DEK's post halfway down: https://boinc.bakerlab.org/rosetta/forum_thread.php?id=3783&nowrap=true#49272
Mod.Sense Volunteer moderator Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0
Oh, thanks dcdc, I was thinking I needed to dig that one up as well. That is the graph of how credit per model changed over time as the various client machines reported in for one specific protein and batch of work. As you can see, it does fluctuate, but it quickly stabilizes. Rosetta Moderator: Mod.Sense
Nothing But Idle Time Joined: 28 Sep 05 Posts: 209 Credit: 139,545 RAC: 0
As a retired programmer I contemplate this... a credit calculation scheme was developed and implemented; we take it on faith that it functions correctly, but does it? Rosetta is not known for its flawless software. Right now everyone just assumes that credits are being calculated as proclaimed. Was the scheme tested on Ralph and verified to work as intended? Could the calculation algorithm become corrupted over time and need re-verification? Have we learned anything from its historical use? Perhaps the scheme needs a review, or a modernizing tweak and upgrade. Rosetta IS one of the lower-awarding projects, which hinders its appeal for some prospective participants who either choose not to join, choose to keep a low resource share, or use Rosetta only for backup.