Message boards : Number crunching : RAC dropping
Mats Petersson · Joined: 29 Sep 05 · Posts: 225 · Credit: 951,788 · RAC: 0
I agree with mmciastro. To end up with two points for 24 hours of work by averaging the claimed credit per decoy, the previous claimed credit per decoy must be terribly low... almost zero, I would think. An Athlon 64 4000+ should achieve around 15 credits per hour per core, so a single workunit running for 24 hours should claim 360 or so credits. To average that down to 2, the earlier claims would have to be in the region of 2/360 of normal, which is a VERY low credit... The only thing I can think of is that something has gone horribly wrong with the credit claiming on this WU due to some math error (like an overflow/underflow situation)...

-- Mats
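A quick back-of-envelope check of those figures (the 15 credits/hour number is Mats's estimate, and the script is purely illustrative):

```python
# Back-of-envelope check of Mats's figures (all numbers are his estimates).
claimed = 15.0 * 24  # ~360 credits claimed for 24 h on an Athlon 64 4000+
granted = 2.0        # credit actually granted

# For the running average to grant only 2 credits against a 360-credit
# claim, the earlier claims per decoy must have been near zero:
print(granted / claimed)  # -> 0.00555..., i.e. about 1/180 of normal
```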
dcdc · Joined: 3 Nov 05 · Posts: 1832 · Credit: 119,821,902 · RAC: 13,431
> My team mate was awarded 2 points for a valid 24 hour workunit (on an A64 4000+ if I remember correctly), which doesn't seem to make any sense at all, even if you take averages into account.

That is probably because you were close to first in with that workunit, with an overclaiming client first in (which would have been awarded what it claimed). That happened on at least one WU when the new credit system was introduced, but AFAIK the credits were all recalculated, so it should have been updated with a higher figure.

Whl, I'm not sure what you mean when you say a high-end PC is dragged down by a low-end one. If a fast and a slow PC both run the same decoy, they will get the same credit for it. The faster one will do it in less time though, and so will run more decoys, therefore earning more credit. How do you mean a high-end PC will be dragged down? Do you mean because the calculation of the credit per decoy takes an average? If so, the average is of the credit claimed, not of the benchmark returned:

benchmark * time = credit

If the benchmarks were averaged for the rolling decoy credit calculation, then high-end PCs would be dragged down and low-end ones dragged up (in fact, low-end PCs would get more credit per decoy, because the time taken is higher on low-end PCs), but that doesn't happen. The benchmarks aren't averaged, only the product of the benchmark and the time, which is, of course, the claimed credit.

cheers, Danny
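A minimal sketch of the mechanism Danny describes; the function names and all the numbers below are hypothetical, not Rosetta's actual implementation:

```python
# Illustrative sketch of the rolling per-decoy credit average.
# All names and figures are hypothetical, not Rosetta's real code.

def claimed_credit(benchmark: float, hours: float) -> float:
    """BOINC-style claim: benchmark score times crunch time."""
    return benchmark * hours

total_claimed = 0.0  # running totals for one workunit type
total_decoys = 0

def report_result(benchmark: float, hours: float, decoys: int) -> float:
    """Fold a result into the average; return the credit granted."""
    global total_claimed, total_decoys
    total_claimed += claimed_credit(benchmark, hours)
    total_decoys += decoys
    return (total_claimed / total_decoys) * decoys

# A fast and a slow host whose benchmarks match their real speed:
print(report_result(benchmark=30.0, hours=1.0, decoys=10))  # fast: 30.0
print(report_result(benchmark=10.0, hours=3.0, decoys=10))  # slow: 30.0
# Both claim 3 credits per decoy, so neither host drags the other.
```

The average is over claimed credit (benchmark × time) per decoy, so an accurately benchmarked slow host and an accurately benchmarked fast host contribute the same per-decoy rate; only the number of decoys they complete differs.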
Feet1st · Joined: 30 Dec 05 · Posts: 1755 · Credit: 4,690,520 · RAC: 0
With the new credit system, BOINC's flawed benchmarks are still used in the sense that they ARE the claimed credit. So the first WU to report in gets credit based directly upon the credit claim. But as for dragging up and down, the only way to drag any machine's figures in any direction is relative to the old credit system. If machineA crunches 3x faster than machineB, and has a benchmark and credit claim that's 3x machineB's, then no one is dragging anyone anywhere. On the other hand, if machineA has a credit claim that is 3x machineB's, but only produces the same number of completed models per hour, then there is a disparity between the credit claimed and the credit awarded. And it is a question of which reports first as to who drags whom. It can work EITHER direction.

======

I think everyone here will agree that 2 credits for a 24hr WU, for ANY machine, isn't right. And I believe that issue was brought up in the first day or two of the new system, and CORRECTED. And if not, there IS a thread to report WUs that don't seem to have been awarded fair credit. The issue should be addressed there.

======

One point of order here. I believe the "new credit system" has changed slightly from its original description. Originally they had said they would release the WU on RALPH, and the average credit claim on RALPH would be used to fix a credit value per model for the WU on Rosetta. So, under that scenario, every reported model crunched for that WU on Rosetta would get identical credit. But they later changed that and decided to just go with a running average on Rosetta. So, the first reported WUs may see some noticeable variation in credit awarded per model.

Add this signature to your EMail: Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might! https://boinc.bakerlab.org/rosetta/
Whl. · Joined: 29 Dec 05 · Posts: 203 · Credit: 275,802 · RAC: 0
> It can work EITHER direction.

Yes, that is very true, Feet1st; it can, with everything in between the lowest and highest end machines. My original point, if we can remember that far back, was that the highest-end machines will always be dragged down by a percentage, as there is no other direction for them to go in. Unless, of course, there is an overclaiming client first in, which claims a very high amount.
dcdc · Joined: 3 Nov 05 · Posts: 1832 · Credit: 119,821,902 · RAC: 13,431
> It can work EITHER direction.

I don't think a high-end computer is any more likely to be dragged down than it is up. I'd prefer a system that recalculates the credit after a certain number of results have been reported, or once all are reported, but as it stands we're only dealing with the first few results in, where the major variation might occur.

With the system as it stands, if a slow PC reports in first with a low benchmark (e.g. a low-end Linux machine), then no matter what machine follows, whether fast or slow (assuming a reasonable benchmark on the second), the credit will be reduced for both machines below what they should have. Likewise, if the first machine in is overclaiming, the second PC will get too much credit. A high-end machine is just as likely to be dragged up as down though. If an overclaiming sloooow machine reports in first, and is then followed by the fastest machine on the planet that benchmarks accurately, both will receive more credit than they should for the decoy. The fastest machine will be dragged up!
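Plugging some hypothetical numbers into that (the per-decoy rates below are made up for illustration):

```python
# How the first reporter skews the second host's grant (hypothetical numbers).

def per_decoy(results):
    """Rolling average: total claimed credit over total decoys."""
    return sum(claim for claim, _ in results) / sum(d for _, d in results)

second = (30.0, 10)  # accurate second host: claims 30 credits for 10 decoys

firsts = {
    "accurate first":      (30.0, 10),  # claims 3.0 cr/decoy
    "low-benchmark first": (10.0, 10),  # claims 1.0 cr/decoy
    "overclaiming first":  (75.0, 10),  # claims 7.5 cr/decoy
}

for label, first in firsts.items():
    rate = per_decoy([first, second])
    print(f"{label}: second host gets {rate * second[1]:.1f} credits")
# accurate: 30.0, low-benchmark: 20.0, overclaiming: 52.5
```

So the same accurate host gets 20, 30 or 52.5 credits for identical work, depending purely on who happened to report first.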
Whl. · Joined: 29 Dec 05 · Posts: 203 · Credit: 275,802 · RAC: 0
> ........both will receive more credit than they should for the decoy......

So some will be getting less than they should and others will be getting more than they should at times. I thought this system was about actual work done, Danny? ;-)
dcdc · Joined: 3 Nov 05 · Posts: 1832 · Credit: 119,821,902 · RAC: 13,431
> ........both will receive more credit than they should for the decoy......

It is ;) It's not perfect, but it's a lot more accurate than the previous system, and there is no bias toward faster or slower systems, OS, CPU, config etc., with the exception of the first few results in, as it's a rolling average. Slower systems don't drag the faster ones down, and vice versa.

cheers
Danny
Buffalo Bill · Joined: 25 Mar 06 · Posts: 71 · Credit: 1,630,458 · RAC: 0
So if I've figured this out correctly, the first machines to report would be somewhat slow machines set for the shortest possible WU run time and the shortest possible queue, and set to report as soon as the WU finishes. The slower machines will probably cut the WU off well before 1 hour, maybe after say one 45-minute decoy, while the faster machines with more decoys per hour will run closer to the full hour. Even then, what are the chances of being in the first ten returns? For me, I run 24 hours on my slow box and 6 hours on the others, with a 1-day queue of WUs. My chances of being in the first 1,000 are remote at best. I don't see this being an issue unless you are trying very hard to be the first return.

Bill
Feet1st · Joined: 30 Dec 05 · Posts: 1755 · Credit: 4,690,520 · RAC: 0
Let's not forget, you're only talking about the first very small number of models reported in. Less than 1% of 1% of the total models that will be crunched. (If you're checking the math: 100,000 models, 1% is 1,000, and 1% of that is 10. Since the assumption is that the first of a WU to report back will have the shortest WU runtime, they will likely be reporting 1 or a very small number of models crunched.)

> I don't see this being an issue unless you are trying very hard to be the first return.

And even if you ARE TRYING very hard to be the first to return, the odds are still against you, and it is still random whether you succeed. So there's no point in pursuing it. After all, even with such GREAT LUCK as to be the first to report... you're still only reporting an hour of work! What will you be doing with the REST of your week? And after 10 models have reported (OK, 50 would be better), you've got a pretty stable number you're building there.

As for whether a fast machine is granted less than fair credit, this is all relative to the work produced by that fast machine as compared to its reported BOINC benchmarks. If it produces work faster than the benchmarks predict, then it gets MORE credit than a slower machine. And so if you wish to pursue this topic further, Whl, I suggest you present a specific "high end" machine which is "dragged down" on credit. Without that, it doesn't seem to warrant further speculation by anyone.

Add this signature to your EMail: Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might! https://boinc.bakerlab.org/rosetta/
Christoph Jansen · Joined: 6 Jun 06 · Posts: 248 · Credit: 267,153 · RAC: 0
> > My team mate was awarded 2 points for a valid 24 hour workunit (on an A64 4000+ if I remember correctly), which doesn't seem to make any sense at all, even if you take averages into account.
> That is probably because you were close to first in with that workunit, with an overclaiming client first in (which would have been awarded what it claimed).

No, it is a stock client, and you can look at all the other results of that machine: they are all alike, as I wrote. This machine is consistently awarded 40% more credits than under the old system. Go to the host and look it up if you do not believe it.

OK, as there seems to be some difficulty in seeing how averaging really works in this case, I will explain the system mathematically so that everybody understands how it works:

Say there is a firm (BOINC) that has found it can use IQ (the benchmark) pretty well to predict how much people will produce on average, and thus pay them accordingly. Everybody does an IQ test before being employed. Now let us say people with an IQ of 100 get $15 per hour. There are three guys, Abe, Ben and Con. They are all 100ers in IQ but do different amounts of work, as the IQ test is not very specific for the task done. The employer decides to pool the money for all people and to deal it out after the work is done, according to the pieces produced by each person. Now this happens:

Three people * $15/hr = $45/hr

Abe does 10 pieces, Ben does 15 pieces, Con does 20 pieces per hour. This means:

Abe thus claims $1.50/piece
Ben claims $1.00/piece
Con claims $0.75/piece

But with the money being dealt out according to the overall work done, this happens:

$45 / 45 pieces = $1/piece

Abe gets $10
Ben gets $15
Con gets $20

In other words: Con gets more with the new pay system than he did before, because before he did more work per time for the same pay as his colleagues. Why does he get more? Because the guys with the lower output boost his pay, as their claim per piece is higher. Is that so? Look at this:

- There are 100 Abes, 20 Bens and 1 Con.
- Together they will earn $1,815/hr.
- They will produce 1,320 pieces/hr.

Average pay: $1,815/hr / 1,320 pieces/hr = $1.375/piece

The Abes now earn $13.75/hr, the Bens get $20.625/hr and the Cons get $27.50/hr. The more Abes there are, the more you earn. The more Cons there are, the less everybody will earn. But as a Con you will always earn twice as much as an Abe; that will never change, and that is the point: everybody gets an equal share of what has been collected, per piece of work produced.

[I know that in reality this system would have to be adapted to what is truly earned in such a plant, so the more Cons you have, the higher the revenue would become. But BOINC does not "earn" anything; it just applies benchmarks by default to calculate its "pay" to participants, so the example exactly fits the default BOINC situation.]

Transferring this to the new system, it means: a machine that produces double the work of another machine gets double the credits of the latter machine.
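Christoph's arithmetic, restated as a small script for anyone who wants to play with the mix of workers (all names and rates are his hypothetical ones):

```python
# Christoph's pooled-pay example as code (purely illustrative).
hourly_claim = 15.0  # every worker "claims" $15/hr (same IQ/benchmark)

# (name, pieces per hour, headcount)
workers = [("Abe", 10, 100), ("Ben", 15, 20), ("Con", 20, 1)]

total_claimed = sum(hourly_claim * count for _, _, count in workers)
total_pieces = sum(pieces * count for _, pieces, count in workers)
pay_per_piece = total_claimed / total_pieces  # 1815 / 1320 = 1.375

for name, pieces, _ in workers:
    print(f"{name}: ${pieces * pay_per_piece:.3f}/hr")
# Abe: $13.750/hr, Ben: $20.625/hr, Con: $27.500/hr
# Con always earns twice Abe's rate, whatever the headcount mix.
```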
Christoph Jansen · Joined: 6 Jun 06 · Posts: 248 · Credit: 267,153 · RAC: 0
Just to show you the host I was talking of, here is an image of part of its claimed/granted credit. Just select any other Core 2 Duo with the stock client and repeat the comparison; you will find all of them get roughly 40% more than they claim on average.
Trog Dog · Joined: 25 Nov 05 · Posts: 129 · Credit: 57,345 · RAC: 0
OK, here's a real live example to follow. I've posted it in Ralph because it's a Ralph WU. Currently this WU is worth 0.329 credits per decoy. Obviously, if nobody else posts their results in the thread, it will be pretty much impossible to track.
Christoph Jansen · Joined: 6 Jun 06 · Posts: 248 · Credit: 267,153 · RAC: 0
> That is probably because you were close to first in with that workunit, with an overclaiming client first in (which would have been awarded what it claimed).

Another comment on this one, especially the "you were close to first in with that workunit, with an overclaiming client first in": in that case you would rather get a lot of credits, not so few. Here is the reason why:

The BOINC client does not affect the number of decoys crunched, so it does not matter whether you use the stock one or a 5.5.0. So all we need is to calculate the claimed credits per decoy and see how that affects the machine reporting second.

Situation A: a stock client reports first and claims 60 credits for 120 decoys. You report 150 decoys after that. You will get:

150 decoys * (60 credits / 120 decoys) = 150 decoys * 0.5 cr/decoy = 75 credits

Situation B: an overclaiming client reports first, having a factor of 2.5, and thus claims 150 credits for the same 120 decoys. You again report 150 decoys. You will get:

150 decoys * (150 credits / 120 decoys) = 150 decoys * 1.25 cr/decoy = 187.5 credits

In this respect overclaiming clients are comparable to older or "less suited" machines: their benchmark implies a greater crunching power than they actually have, and thus exaggerates the credits they claim per decoy. So if you want to get good credits you need to be second after an overclaiming client, or e.g. after a Mac, as both will claim a lot more credits per decoy than the average machine that crunches Rosetta. The effect is: the more your machine's crunching power per benchmark unit exceeds that of the first machine, the higher your result will be. So a G3 with 5.5.0 reporting first and a Core 2 Duo reporting second would probably max out the Core 2's credit.
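The two situations in code, following Christoph's simplified arithmetic, which prices the second result at the first reporter's per-decoy rate (the numbers are his):

```python
# Christoph's Situations A and B (illustrative; his numbers).

def credit_for_second(first_claim: float, first_decoys: int,
                      my_decoys: int) -> float:
    """Credit for the second reporter at the first claim's per-decoy rate."""
    return my_decoys * (first_claim / first_decoys)

# Situation A: accurate stock client first in (60 credits / 120 decoys)
print(credit_for_second(60.0, 120, 150))   # -> 75.0

# Situation B: overclaimer first in (factor 2.5 -> 150 credits / 120 decoys)
print(credit_for_second(150.0, 120, 150))  # -> 187.5
```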
Saenger · Joined: 19 Sep 05 · Posts: 271 · Credit: 824,883 · RAC: 0
> My original point, if we can remember that far back, was that the highest-end machines will always be dragged down by a percentage, as there is no other direction for them to go in.

That's wrong. High-end machines can have a good way up, if they use Linux, for example, and the stock client. Such a high-end machine will get a boost in its credits. The benchmarks of BOINC are unfortunately only comparable if you use a) stock clients only, or the same breed of "opt." clients only, and b) the same OS. Especially the artificially inflated benchmarks of "opt." clients were anything but meaningful in the whole picture.

Only one type of high-end puter is definitely being "dragged down", and that's PPC. The application is not suited to this machine, and so it gets less work done than a machine with similar benchmarks. Less work -> less credits, so if you're in this for the credits race, it's better not to use PPC on this project. If you're in it for the science, it's still very valuable.

I would very much appreciate a "pending credit" feature, holding credit back until enough WUs are in to give a big enough sample for the averaging process. All this speculating about how to get the most/least credits per WU, and how to set the preferences to accommodate the credits rather than the science, would vanish.

But to insert my 2 cents as well: IMHO a (bunch of) high-end puter(s) with an inflated-benchmark client and a short runtime will increase the credits/decoy quite a lot, and thus drag everybody up. A (bunch of) high-end puter(s) with the stock client, Linux and a short runtime will do the opposite. A slow puter with a short runtime will have little effect either way, as it doesn't deliver much work into the averaging, regardless of running an "opt." client or stock.
truckpuller · Joined: 5 Nov 05 · Posts: 40 · Credit: 229,134 · RAC: 0
On the old credit system my 1.6 Duron running at 2.0 GHz had a RAC of around 250 or so, and now it's down to, I think, 211. My Athlon XP 2500+ Barton @ 2.0 GHz was running about the same, and it's down to 235. My Sempron 64 2800+ @ 2.0 GHz is kicking out a RAC of 289. So if they are all running at around the same speed, shouldn't they be fairly close in RAC? I just think the consistency of the RAC system is in error in some way or another, or I suppose maybe I just don't understand it.

Visit us at Christianboards.org
tralala · Joined: 8 Apr 06 · Posts: 376 · Credit: 581,806 · RAC: 0
> On the old credit system my 1.6 Duron running at 2.0 GHz had a RAC of around 250 or so, and now it's down to, I think, 211. My Athlon XP 2500+ Barton @ 2.0 GHz was running about the same, and it's down to 235. My Sempron 64 2800+ @ 2.0 GHz is kicking out a RAC of 289. So if they are all running at around the same speed, shouldn't they be fairly close in RAC?

The Sempron 64 uses the Athlon 64 architecture, which is more efficient per MHz than the Athlon XP, so it is expected that it will do more work at the same clock speed. Durons have less cache, which makes them a little slower for Rosetta, so the Duron coming in a bit below a comparable Athlon XP with more cache seems right, too. Strangely enough, all three of your comps show 1 MB cache, which seems wrong for all three of them.
Keck_Komputers · Joined: 17 Sep 05 · Posts: 211 · Credit: 4,246,150 · RAC: 0
> Strangely enough, all three of your comps show 1 MB cache, which seems wrong for all three of them.

The cache detection code is broken, so 1 MB is substituted for all hosts.

BOINC WIKI
BOINCing since 2002/12/8