Claimed credit vs grant credit

Message boards : Number crunching : Claimed credit vs grant credit


[CAMP] balint
Joined: 27 Oct 05
Posts: 2
Credit: 102,175
RAC: 0
Message 53036 - Posted: 13 May 2008, 16:39:48 UTC

Hi, I think I've got a problem with my Rosetta credit. It gives me much less credit than expected, and granted credit is always less than claimed. What could be the matter?
Here is a screenshot of my Tasks page (image not preserved).

Thanks for advice.
ID: 53036
[CAMP] balint
Joined: 27 Oct 05
Posts: 2
Credit: 102,175
RAC: 0
Message 53082 - Posted: 15 May 2008, 23:23:29 UTC

I'm sorry to bump this thread, but can anybody help me?
ID: 53082
David Emigh
Joined: 13 Mar 06
Posts: 158
Credit: 417,178
RAC: 0
Message 53083 - Posted: 16 May 2008, 1:03:20 UTC
Last modified: 16 May 2008, 1:07:59 UTC

Claimed credit is based on the benchmarks your computer runs when you set up the BOINC client. The factors that affect it include IntegerOps/second, FloatingPointOps/second, and so forth.

For Rosetta, granted credit is based on the number of models/decoys that your computer actually builds during the time it is running the task, irrespective of the amount of time it takes to build each model.

The very first person to report a particular type of task gets exactly the same granted credit as claimed credit.

For each person who reports after the first, granted credit is based on the running average of claimed credit per model across all previous reports.

The following is my understanding of how the system works. I could be wrong. I humbly submit to correction by anyone better informed than myself.

Example:

The first person to report completes 10 models and claims 100 credits. They get exactly what they claimed. Those models are worth 10 credits each, regardless of how long it took to make them.

The second person to report completes 8 models and claims 170 credits. They get 80 credits, because the models were worth 10 credits each when they reported. However, the models will be worth a little more to the next person.

At this point, 18 models have been completed and 270 credits have been claimed. The models are now worth 15 credits each.

The third person to report completes 12 models and claims 90 credits. They get 180 credits, because models were worth 15 credits each when they reported. However, the models will be worth a little less to the next person.

At this point, 30 models have been completed and 360 credits have been claimed. The models are now worth 12 credits each.

And so on...
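In code, the scheme above might look like this (a sketch of my reading of the running average, not the project's actual implementation):

```python
def grant_credit(reports):
    """reports: list of (models_completed, claimed_credit) in report order.
    Returns the credit granted to each reporter under the running-average scheme."""
    granted = []
    total_models = 0
    total_claimed = 0
    for models, claimed in reports:
        if total_models == 0:
            # The first reporter is granted exactly what they claimed.
            grant = claimed
        else:
            # Later reporters get the running average credit per model
            # established by all previous claims.
            grant = (total_claimed / total_models) * models
        granted.append(grant)
        total_models += models
        total_claimed += claimed
    return granted

# The worked example from this post:
print(grant_credit([(10, 100), (8, 170), (12, 90)]))
# → [100, 80.0, 180.0]
```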


Again, I remind you that the above is my understanding of the process, and my understanding may be flawed.
Rosie, Rosie, she's our gal,
If she can't do it, no one shall!
ID: 53083
Mikey
Joined: 9 May 07
Posts: 5
Credit: 135,037
RAC: 0
Message 53085 - Posted: 16 May 2008, 2:01:57 UTC

Not sure if it will make you feel better, but my credit per unit is dropping like a stone, in a similar manner to yours. This started May 1st.

I came to this board to see if others are having the same problem.
ID: 53085
AMD_is_logical
Joined: 20 Dec 05
Posts: 299
Credit: 31,460,681
RAC: 0
Message 53086 - Posted: 16 May 2008, 2:24:58 UTC

Some computers will go into a low-power low-frequency state when not in use. Depending on how a computer is set up, it may think it's not in use when only a low priority task (such as Rosetta) is running. Laptops are often set up this way. I've also heard that Ubuntu Linux is like this by default.

If a computer is running in low-power mode, it will crunch models very slowly and thus will get much less credit than if it were running full speed.
ID: 53086
fjpod
Joined: 9 Nov 07
Posts: 17
Credit: 2,201,029
RAC: 0
Message 53365 - Posted: 27 May 2008, 3:49:15 UTC - in response to Message 53083.  

Claimed credit is based on the benchmarks your computer runs when you set up the BOINC client. [remainder of Message 53083, quoted in full above, trimmed]

So if I understand this, having a really fast computer is not necessarily a help in acquiring credits. However, being the first one to complete a unit might be helpful. But if a bunch of slow computers have finished a workunit before you get to it, then you have no advantage if you have a fast one?
ID: 53365
dcdc
Joined: 3 Nov 05
Posts: 1829
Credit: 114,371,266
RAC: 53,072
Message 53376 - Posted: 27 May 2008, 11:10:59 UTC - in response to Message 53365.  
Last modified: 27 May 2008, 11:16:52 UTC

So if I understand this, having a really fast computer is not necessarily a help in acquiring credits. However, being the first one to complete a unit might be helpful. But if a bunch of slow computers have finished a workunit before you get to it, then you have no advantage if you have a fast one?

While David Emigh's description is correct, it is unlikely that the claimed credits will continually fall as in the example, so the granted credit doesn't tend to fall either. The credit is just as likely to rise at first; either way, it evens out to the average of the claimed credit to date. Therefore faster computers, which produce more decoys (and claim more credit), get more credit.
ID: 53376
Mod.Sense
Volunteer moderator
Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 53384 - Posted: 27 May 2008, 14:11:18 UTC

A "really fast" computer will post higher benchmarks. It will run for the number of hours set by the runtime preference in your Rosetta preferences (rounding the runtime to the nearest model). The only time "fast" vs "slow" becomes a credit issue is when a machine's benchmarks do not reflect the same degree of speed as the actual Rosetta work being produced.

If one machine posts 1,000 floating point operations per second benchmarks and completes 10 models with 3 hours of runtime, a faster machine that posts 2,000 floating point operations per second would be "twice as fast". But if that second machine only completes 18 models of the identical type of work in the same 3 hours, it's not going to get quite twice the credit per hour average of the first machine. It will "claim" twice the credit, but not be granted that much.
Rosetta Moderator: Mod.Sense
ID: 53384
fjpod
Joined: 9 Nov 07
Posts: 17
Credit: 2,201,029
RAC: 0
Message 53396 - Posted: 27 May 2008, 18:06:03 UTC

I appreciate all the attempts to explain this, but I still don't get it. So, again, what you are saying is: if I have a fast machine, and someone else with a slow machine finishes a work unit first and gets, say, 80 credits, and I finish that same workunit later in half the time, I will probably not get 80 credits, but maybe 40 or 50. If I am correct, this doesn't seem fair. I'm not mad. I'm just trying to understand the beast.

Maybe my problem is I don't really understand what a decoy is.
ID: 53396
Ingleside
Joined: 25 Sep 05
Posts: 107
Credit: 1,514,472
RAC: 0
Message 53397 - Posted: 27 May 2008, 18:30:20 UTC - in response to Message 53396.  

I appreciate all the attempts to explain this, but I still don't get it. So, again, what you are saying is: if I have a fast machine, and someone else with a slow machine finishes a work unit first and gets, say, 80 credits, and I finish that same workunit later in half the time, I will probably not get 80 credits, but maybe 40 or 50. If I am correct, this doesn't seem fair. I'm not mad. I'm just trying to understand the beast.

Maybe my problem is I don't really understand what a decoy is.

To make the calculation "simple":

1; "slow" computer uses 4 hours and manages to crunch 10 decoys in this time, and gets 80 credits. This is 8 credits/decoy, and 20 credits/hour.
2; "fast" computer uses 4 hours and manages to crunch 18 decoys in this time. For this, the "fast" computer gets:

8 credits/decoy * 18 decoys = 144 credits.

This also means, 36 credits/hour.


So, a little simplified, everyone gets 8 credits/decoy for this WU type, but a fast computer generates more decoys in the same amount of time than a slow one, and therefore gets more credit/hour.


In practice, credit/decoy will vary somewhat as more and more results are returned, but this variation should be small.
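The arithmetic above can be double-checked in a few lines (figures taken from the example in this post, not from project code):

```python
# "Slow" computer: 4 hours, 10 decoys, 80 credits granted.
slow_decoys, slow_credit, hours = 10, 80, 4
credit_per_decoy = slow_credit / slow_decoys    # 8.0 credits per decoy

# "Fast" computer: same 4 hours, 18 decoys at the same per-decoy rate.
fast_decoys = 18
fast_credit = credit_per_decoy * fast_decoys    # 144.0 credits

print(credit_per_decoy, fast_credit, fast_credit / hours)
# → 8.0 144.0 36.0
```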

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
ID: 53397
Mod.Sense
Volunteer moderator
Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 53399 - Posted: 27 May 2008, 19:06:12 UTC
Last modified: 27 May 2008, 19:06:52 UTC

Decoys, also called models, are basically new (and different) runs on the protein that the task pertains to. The way Rosetta honors your runtime preference is that it keeps running more decoys until the application can see, from the amount of time each is taking, that doing another would exceed the preference. It then marks the task as completed. You can see the number of decoys you completed when you view the details of the task on the website. You can also see it in the graphic for the Rosetta application (though it is not yet shown in the "mini" Rosetta application graphic).

Where people always get thrown off is that they try to count tasks, or time, rather than decoys. The actual science work done for the project is measured in decoys, and credit is granted per decoy.

The other area where people get confused is comparing credit per model on one task to credit per model on another, forgetting that the comparison only holds for the same type of protein with the same analysis method used on it. To be comparable, the WU names basically have to be identical except for the second-to-last block of digits, which is where tasks are sequentially numbered.

Credit claims are just a reflection of the machine's benchmarks, and the time spent on the task. But credit granted reflects the amount of work achieved, regardless of time spent or your benchmarks.

So, if ANY machine reports in the first completed task in the line, and is granted 80 credits for completing X decoys, then the next one to report, regardless of the relative speed, will get 80/X credits granted per decoy they report back. If the credit claim of the second report was 100 for the same X decoys produced, then the third machine to report would get (80+100)/2X credits per decoy.

So your claim effects the average that is building forward, but does not directly effect your credit granted.
Rosetta Moderator: Mod.Sense
ID: 53399
Chilcotin
Joined: 5 Nov 05
Posts: 15
Credit: 16,969,500
RAC: 0
Message 53459 - Posted: 30 May 2008, 13:30:41 UTC - in response to Message 53399.  

I have been puzzling about this for awhile as well.

I tend to let my tasks run for a while, usually about 12 hours. I reduced that a bit recently, as some tasks were crashing if run too long.

On the surface, it would appear that, all other things being equal, a machine with shorter run times would have an advantage over a machine with longer run times, as its models, being returned first, would not be discounted.

Recognizing that "credit discussions" are always fraught with peril I emphasize that I am not complaining ... just trying to understand the process.
ID: 53459
dcdc
Joined: 3 Nov 05
Posts: 1829
Credit: 114,371,266
RAC: 53,072
Message 53460 - Posted: 30 May 2008, 13:43:02 UTC - in response to Message 53459.  

On the surface, it would appear that, all other things being equal, a machine with shorter run times would have an advantage over a machine with longer run times, as its models, being returned first, would not be discounted.


There's no reason why a task returned early would get more credit than a task containing the same number of decoys that is returned later.

ID: 53460
Mod.Sense
Volunteer moderator
Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 53463 - Posted: 30 May 2008, 16:31:37 UTC - in response to Message 53459.  

On the surface, it would appear that, all other things being equal, a machine with shorter run times would have an advantage over a machine with longer run times, as its models, being returned first, would not be discounted.


Not certain what "discounted" credit you refer to. In order to give an informative illustration, one machine has to do something different from the other, and sometimes I show that as being less... but it being more is just as likely. After the first handful of reports, an average is very firmly established, and is not greatly influenced in one direction or the other by the rest of the reports. Indeed, in my most recent example earlier in this thread, the third one to report receives slightly more credit per model than the first one to report received. This was due to how the benchmarks of the second one to report compared to their ability to complete models.

The only advantage that I can see for the very first to report is that, I believe, their claimed credit is also granted to them. So if they claimed an outrageous amount of credit, they would get it. But the people following that initial report would ALSO have that outrageous figure factored into the average they received, which removes any incentive to falsify a credit claim. Also, there's no good way to bias your odds of being the very first to report other than a short runtime. It's statistically very unlikely you will be the first, because you probably weren't the first to be issued a task from that batch either.

The only advantage I can think of (assuming long tasks complete normally) to setting a shorter runtime preference and reporting results right away is that you get the credit issued sooner. In other words, if I start at zero and do two days of work for 250 credits, then on day 1, if I've not reported anything back yet, I'm showing zero credits so far. The work is completed, but it has not been reported back yet. So the overall credit issued is the same (after 2 days), and the resulting RAC will be identical (at the end of the 2 days), but for a very short window of time your total credit and any resulting RAC change would reflect slightly more work having been completed. For people who fixate on the fractions of points in their RAC, this is the sort of thing they do to bump it. But it is somewhat self-defeating, because you have to KEEP doing it to maintain the number, and if you had just waited and let it run normally, it would quickly maintain the same number on its own.

It's a lot like buying a bunch of stuff you need anyway for your business just before the end of your fiscal year so you can deduct it from your taxes. You get to deduct it regardless of when it was purchased. But if you buy it now, you get to deduct it sooner and reflect it in an earlier fiscal quarter than if you had just waited and bought it when it was actually required later on. With taxes, there is the alternative of reporting more income in the current period, and the time value of money. With credits, these other influences are not present.
Rosetta Moderator: Mod.Sense
ID: 53463
Chilcotin
Joined: 5 Nov 05
Posts: 15
Credit: 16,969,500
RAC: 0
Message 53473 - Posted: 31 May 2008, 0:31:50 UTC - in response to Message 53465.  

I think he's referring to the fact that tasks which run longer have a greater chance to crash, thus not returning any credit at all.


No, but thanks for the credit (no pun intended). I had understood from the discussion that later submissions received less credit than earlier submissions for a particular model.

Let it ride .. it isn't something that I am worrying about. I was just puzzled by the credit strategy. I will be participating regardless.

ID: 53473
dcdc
Joined: 3 Nov 05
Posts: 1829
Credit: 114,371,266
RAC: 53,072
Message 53489 - Posted: 31 May 2008, 20:56:46 UTC
Last modified: 31 May 2008, 20:57:30 UTC

Here's the thread I was thinking of - in particular, DEK's post halfway down:

https://boinc.bakerlab.org/rosetta/forum_thread.php?id=3783&nowrap=true#49272
ID: 53489
Mod.Sense
Volunteer moderator
Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 53512 - Posted: 1 Jun 2008, 20:38:40 UTC

Oh, thanks dcdc, I was thinking I needed to dig that one up as well. That is the graph of how credit per model changed over time as the various client machines reported in for one specific protein and batch of work. As you can see, it does fluctuate, but quickly stabilizes.
Rosetta Moderator: Mod.Sense
ID: 53512
Nothing But Idle Time
Joined: 28 Sep 05
Posts: 209
Credit: 139,545
RAC: 0
Message 53571 - Posted: 6 Jun 2008, 12:24:47 UTC

As a retired programmer, I contemplate this: a credit calculation scheme was developed and implemented, and we take it on faith that it functions correctly. But does it? Rosetta is not known for its flawless software. Right now everyone just assumes that credits are being calculated as proclaimed.

Was the scheme tested on Ralph and verified to work as intended?
Could the calculation algorithm become corrupted over time and need re-verification?

Have we learned anything from its historical use? Perhaps the scheme needs a review or a modernizing tweak and upgrade. Rosetta IS one of the lower awarding projects which hinders its appeal for some prospective participants who either choose not to join, choose to keep a low resource share, or use Rosetta only for backup.
ID: 53571




©2024 University of Washington
https://www.bakerlab.org