Message boards : Number crunching : Credit always low
dcdc Send message Joined: 3 Nov 05 Posts: 1831 Credit: 119,210,217 RAC: 1,368 |
For clarity, from the start:

Claimed Credit
This is based on the quick BOINC benchmark (50% Whetstone, 50% Dhrystone) multiplied by the CPU time taken to process the work unit (WU). The benchmark measures almost entirely the CPU core and doesn't take other factors such as cache size and memory speed into account.

Granted Credit
Each WU that your computer completes can contain many decoys (i.e. models). This is done to allow Rosetta to create work units that run for approximately a desired time - 8 hours by default, I believe(?). So one completed 8-hour WU might contain anything from 1 to 50+ decoys. The granted credit is calculated as the average claimed credit from all previously submitted WUs of the same type, multiplied by the number of decoys.

An example
Take this WU: simpleF2_1f0s_2cx1_ProteinInterfaceDesign_15Apr2010_19616_136
My claimed credit was 47.97. This was based on a benchmark that calculated I should get 15.98 credits per CPU-hour, and it ran for 10,806s (3.00 hours). 15.98 x 3.00 = 47.97 credits claimed. Therefore, if my benchmark had been twice as high, my claimed credit would have been twice as high.
My granted credit was 51.54. The submitted WU contained 484 decoys. From previous submissions of this task, the R@H servers knew the claimed credit was, on average, 0.1065 credits per decoy. My computer therefore received 0.1065 x 484 = 51.54 credits.

So, in summary: if your computer gets a high or low benchmark compared to its R@H crunching ability, the claimed credit will be correspondingly high or low. Granted credit is based on the work completed, so if your CPU has a large cache that doesn't help the benchmark but does help R@H crunching, you will probably find that claimed is lower than granted, and vice versa.

The result is this:
PC 'A' has a small cache but a fast CPU core (e.g. 3GHz dual-core AMD Athlon). This computer gets a very high benchmark but can't match that on Rosetta. On average it claims 50 credits per 8-hour task and is granted 40 credits.
PC 'B' has a large cache but slower FPU performance (e.g. 2.8GHz dual-core Intel Core 2). This computer gets a lower benchmark but Rosetta runs faster because of the additional cache. On average it claims 30 credits per 8-hour task and is granted 40 credits.
One is claiming more than it receives and one is claiming less, but both are getting equal credit because their performance is equal on R@H.
HTH
Danny |
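[Editor's note] The two formulas in the example above can be sketched as a few lines of Python. The numbers come straight from the example WU in the post; the function names are illustrative, not part of BOINC or Rosetta@home.

```python
# Illustrative sketch of the two credit formulas described above.
# Numbers are taken from the example WU in the post; the function
# names are hypothetical, not part of BOINC or Rosetta@home.

def claimed_credit(benchmark_credits_per_hour, cpu_seconds):
    """Claimed credit: benchmark rate multiplied by CPU time in hours."""
    return benchmark_credits_per_hour * (cpu_seconds / 3600.0)

def granted_credit(avg_claimed_per_decoy, decoys):
    """Granted credit: average historical claim per decoy times decoy count."""
    return avg_claimed_per_decoy * decoys

claimed = claimed_credit(15.98, 10806)   # ~47.97 credits
granted = granted_credit(0.1065, 484)    # ~51.5 credits
print(round(claimed, 2), round(granted, 2))
```

Note that the benchmark only enters the claimed figure; the granted figure depends only on the per-decoy average and the decoy count, which is the whole point of the scheme.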
dcdc Send message Joined: 3 Nov 05 Posts: 1831 Credit: 119,210,217 RAC: 1,368 |
I did the above for my 920 and 3 other 920s: I don't have Excel on this computer to work out the average credit per CPU-hour, but if yours is lower could it be because they're using Vista (which I believe is more efficient at scheduling more cores, and you have lots), or because they're using x64? I'm not sure what difference this would make but it might be one of those, or possibly that Rosetta is getting a boost from turboboost on the stock machines which I presume is disabled on yours due to the overclock? |
Jochen Send message Joined: 6 Jun 06 Posts: 133 Credit: 3,847,433 RAC: 0 |
I wrote a little program in C++ for this. I do not trust Excel... ;) No, actually it took me less time to write a little parser than to prepare the data for an Excel sheet. And of course this is different from the BOINC RAC, since it does not take into consideration the period of time over which these results were returned. With these credits per hour per core you should be able to calculate the RAC quite exactly, if your computer runs 24/7. The only reason for divergences is whether or not there is load from other programs _while_ crunching. Jochen |
Jochen Send message Joined: 6 Jun 06 Posts: 133 Credit: 3,847,433 RAC: 0 |
I have now parsed some data from my other computer (Q9650). Credits per hour per core: 21.0304418341766. 21.0304418341766 * 4 (cores) * 24 (hours) = 2018.9224160809536 RAC. This is pretty close to the real RAC of currently 2005: https://boinc.bakerlab.org/rosetta/show_host_detail.php?hostid=1096431 Jochen |
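[Editor's note] The RAC estimate above is just a linear scale-up of the per-core rate, which a short sketch makes explicit. The function name is illustrative, not a BOINC API.

```python
# Illustrative sketch of the RAC estimate used above: credits per hour
# per core, scaled to all cores running 24/7. The function name is
# hypothetical, not part of BOINC.

def estimated_rac(credits_per_hour_per_core, cores, hours_per_day=24):
    return credits_per_hour_per_core * cores * hours_per_day

print(estimated_rac(21.0304418341766, 4))  # ~2018.9, close to the observed RAC of 2005
```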
Sid Celery Send message Joined: 11 Feb 08 Posts: 2084 Credit: 40,623,376 RAC: 3,716 |
For clarity, from the start: ... Thanks for that. That's the neatest explanation I've seen. |
dcdc Send message Joined: 3 Nov 05 Posts: 1831 Credit: 119,210,217 RAC: 1,368 |
Oops - it should say: The granted credit is calculated as the average claimed credit per decoy from all previously submitted WUs of the same type, multiplied by the number of decoys. :D |
Jochen Send message Joined: 6 Jun 06 Posts: 133 Credit: 3,847,433 RAC: 0 |
So what you are trying to tell me, is that I got a fast machine, but Rosetta can not make any use of it?!? In this case, I rather quit crunching Rosetta on this machine... Jochen |
dcdc Send message Joined: 3 Nov 05 Posts: 1831 Credit: 119,210,217 RAC: 1,368 |
So what you are trying to tell me, is that I got a fast machine, but Rosetta can not make any use of it?!? In this case, I rather quit crunching Rosetta on this machine... No, the 920 is about as good as you can get on Rosetta, and an overclocked one is even better. What I'm trying to tell you is that a machine that consistently gets fewer credits than claimed can still be getting more credits per WU than a machine that gets more credits than claimed. A difference between claimed and granted is irrelevant, because claimed is based on a benchmark that doesn't reflect Rosetta very well and so can be artificially high or low. To put it another way, if we both ran exactly the same WU for 8hrs and I was using an old Athlon, we'd get the same credit for each decoy processed, but yours would complete many more decoys in 8hrs than mine would, so you'd get proportionally more credit for that WU. I'll have a look at your numbers this evening and get back to you though. |
mikey Send message Joined: 5 Jan 06 Posts: 1895 Credit: 8,929,208 RAC: 1,133 |
So what you are trying to tell me, is that I got a fast machine, but Rosetta can not make any use of it?!? In this case, I rather quit crunching Rosetta on this machine... Jochen has a 3 to 4 day cache, so you will have to go a few pages to get to some completed units, but he is fairly consistently doing units in the 1 to 2 hour range and only doing about 5 to 10 decoys per unit. At least he is returning units in that time frame, 3 to 4 days, and you need to go all the way to 240 workunits to find units that have been returned. |
Jochen Send message Joined: 6 Jun 06 Posts: 133 Credit: 3,847,433 RAC: 0 |
Jochen has a 3 to 4 day cache so you will have to go a few pages to get to some completed units, but he is farily consistently doing units in the 1 to 2 hour range and only doing about 5 to 10 decoys per unit. At least he is returning units in that time frame, 3 to 4 days, and you need to go all the way to 240 workunits to find units that have been returned. Basically correct. It is a 3 day cache, but it might be messed up a bit, since my internet was 'broken' for 24 hours until 10 AM today (MESZ = GMT+2). Currently the first page with completed results starts at 220 (hit the Next button once on the first result page and change the end of the URL from 'offset=20' to 'offset=220'). From my calculations, my 920 should reach a RAC of 3500. I was rather expecting a RAC close to 4000... Jochen |
Jochen Send message Joined: 6 Jun 06 Posts: 133 Credit: 3,847,433 RAC: 0 |
I have just processed some more data from my 920: Processed results: 177. Min. credits per hour: 13.7128212108908. Max. credits per hour: 26.3064721433467. Ave. credits per hour: 18.2945678916994. Estimated RAC: 3512.55703520629. How is it possible that the credits per hour vary by 50 percent? I was not running anything else but Rosetta on this computer. Is this a hyperthreading issue? Jochen |
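[Editor's note] The statistics quoted above (min, max, average, and the RAC estimate for an 8-thread hyperthreaded i7-920) can be sketched as follows. The sample list is hypothetical; only the averaging and the cores-times-hours scaling match the post.

```python
# Illustrative sketch of the per-result statistics quoted above. The
# sample data is hypothetical; the RAC scaling assumes 8 logical cores
# (a hyperthreaded i7-920) running 24/7, as in the post.

def summarize(credits_per_hour, cores=8, hours_per_day=24):
    avg = sum(credits_per_hour) / len(credits_per_hour)
    return {
        "min": min(credits_per_hour),
        "max": max(credits_per_hour),
        "avg": avg,
        "estimated_rac": avg * cores * hours_per_day,
    }

sample = [13.7, 18.3, 26.3, 17.9, 15.2]  # hypothetical per-result rates
print(summarize(sample))
```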
Chilean Send message Joined: 16 Oct 05 Posts: 711 Credit: 26,694,507 RAC: 0 |
I have just processed some more data form my 920: I just realized your PC has 3GB of RAM... and is running 8 threads of JUST Rosetta... That's even below the 512/core recommendation. Not sure whether HT hinders or helps out Rosetta crunching. |
Jochen Send message Joined: 6 Jun 06 Posts: 133 Credit: 3,847,433 RAC: 0 |
I just realized your PC has 3GB of RAM... and is running 8 threads of JUST Rosetta... That's even below the 512/core recommendation. I am aware of this problem. Actually it has 6 GB of RAM, it is just the limiting 32-bit OS... As well I have been watching the memory consumption since I reinstalled BOINC last Saturday. Average usage is 300 MB per WU, with a peak here and there up to 320 MB. But usually there are still 400 MB of RAM left... Should not be much of a problem. I did not see WUs 'waiting for memory'. Jochen |
dcdc Send message Joined: 3 Nov 05 Posts: 1831 Credit: 119,210,217 RAC: 1,368 |
Not sure whether HT hinders or helps out Rosetta crunching. From previous posts it helps on i7 - not so sure on P4. |
Jochen Send message Joined: 6 Jun 06 Posts: 133 Credit: 3,847,433 RAC: 0 |
The granted credit is calculated as the average claimed credit per decoy from all previously submitted WUs of the same type, multiplied by the number of decoys. That is even worse than I expected. What is the point of granting credits based on the average claimed credit for a decoy?!? There are so many different computers out there that this does not seem to be a fair method at all. As well, using an optimized BOINC client will result in claiming far too many credits - as long as there is no optimized Rosetta client. I would run Rosetta even without any crediting system... But since there is one, it should be fair and reasonable. There surely is a way to count the actual FLOPS for a WU. In my mind this would be a better basis for a crediting system. Jochen |
dcdc Send message Joined: 3 Nov 05 Posts: 1831 Credit: 119,210,217 RAC: 1,368 |
The granted credit is calculated as the average claimed credit per decoy from all previously submitted WUs of the same type, multiplied by the number of decoys. It's fair - I think you misunderstand? I'll give another example: If my slow computer finishes a decoy in 10 mins and your computer (which is 5x faster) completes the same decoy in 2 mins then we'll both receive the same amount of credit for that decoy (because they've done the same amount of work). In a given period, your computer will complete 5x as many decoys as my computer and so will get 5x the credit. Likewise, a 30% overclock should result in 30% quicker completion of each decoy, and so 30% more credit in a given period. The claimed credit has very little effect because the value of each decoy is calculated from the average credit claimed for all previous decoys of that type. Therefore if you have a massively high benchmark (and consequently an excessive claimed credit), it will have a very minor effect on the granted credit - it will only serve to slightly increase the average claim for that type of decoy. Danny |
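[Editor's note] The fairness argument above reduces to simple arithmetic: credit per decoy is fixed, so total credit in a period scales with decoys completed. A sketch, with entirely hypothetical numbers apart from the per-decoy value from the earlier example:

```python
# Illustrative sketch of the fairness argument above: both machines get
# the same credit per decoy, so total credit scales with the number of
# decoys completed. All timings are hypothetical.

CREDIT_PER_DECOY = 0.1065  # from the earlier example WU

def credit_in_period(minutes_per_decoy, period_minutes):
    decoys = period_minutes // minutes_per_decoy
    return decoys * CREDIT_PER_DECOY

slow = credit_in_period(10, 480)  # 48 decoys in 8 hours
fast = credit_in_period(2, 480)   # 240 decoys in 8 hours
print(fast / slow)  # the 5x-faster machine earns 5x the credit
```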
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
Decoys are the finished product that the project needs. If one machine can produce one in an hour and another machine takes 90 minutes to produce one... they both produced the same amount of work, and granting credit per decoy assures they get the same credit for the same result. All without needing a second person to run each and every task just to confirm credit. Each decoy is different; that's the whole point. So it is not possible to say with certainty ahead of time what the actual computational requirement of a given decoy will be. Averaging the reported credit per decoy is the most appropriate system the team has come up with. It allows for variation both in the machines running the work and in the work being run. Point of order: the recommendation is that machines running Rosetta should have a minimum of 512MB of memory. In general, that allows some margin for an operating system and one task to run. So when you extend to more than one core, the expectation would be something less than 512MB for each additional core (because any additional requirement by the operating system will be minimal). No specific per-core recommendation is given, but Jochen's approach of reviewing usage of actual running tasks is a good one. Just keep in mind that the requirements do vary by type of task being performed. Rosetta Moderator: Mod.Sense |
Jochen Send message Joined: 6 Jun 06 Posts: 133 Credit: 3,847,433 RAC: 0 |
I see why this approach was chosen. But looking at this WU, which was a very short-running one with lots of granted credits, it is just all about whether you get the right WUs at the right time. This is the big problem I see in this approach. There are two things one has to do to gain a certain influence on the granted credits (posting exactly what they are might be considered 'annoying', so I won't ;) ). But anyway, since I can not come up with a better idea, I accept it the way it is. BTT: I also noticed that my 920 is now being granted approx. 10 more credits per WU. I did not change anything on the computer, though... Jochen |
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
Yes, the variability is the one major point the credit system doesn't address. It makes small sample sizes difficult to compare, but the system also assures that it all averages out over time. I guess I'm just saying that since the whole thing is based on averages of thousands of reports, your odds of being unlucky are equal to those of being lucky. Rosetta Moderator: Mod.Sense |
Jochen Send message Joined: 6 Jun 06 Posts: 133 Credit: 3,847,433 RAC: 0 |
Now this is a ridiculous result: https://boinc.bakerlab.org/rosetta/result.php?resultid=335073708 28 credits for 7 hours of calculation. I guess I will start aborting long-running models again. This is just a waste of time... Jochen |
©2024 University of Washington
https://www.bakerlab.org