Message boards : Number crunching : Another solution for the credit issue that hasn't been mentioned.
Tallbill Joined: 23 Jul 06 Posts: 12 Credit: 101,854 RAC: 0
I don't have time to read every message everywhere, but I haven't seen this recommended yet. Why not go with the planned system of measuring credit based on CPU time and performance, but instead of lowering scores, set the benchmarks equal to what the optimized clients do now? As far as I know, it was never against the rules to use an optimized client, so instead of punishing the people who have, use those levels to set the standard of what a WU is worth, and take the data back to February to RAISE everyone up to an equal level instead of lowering everyone to an equal level. The only problem left is that credits will be worth more than in other BOINC projects, but this is an independent project that can measure its work however it wants. Statistics sites will just have to skew the results as they see fit. I hope this makes sense, and it really shouldn't piss people off or require a second data column or separate scoring.

Edit - I've read that credits won't be backdated whatsoever, but this idea could still apply to future credits. Multi-project parity isn't as important as equality within a project.
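A rough sketch of what this "level up, not down" rule could look like; the 15 credits/hour target, the leveled_credit helper, and the example numbers below are illustrative assumptions, not Rosetta's or BOINC's actual credit code:

```python
# Hypothetical sketch: rescale stock-client grants up to the optimized-client rate.
# The 15.0 credits/hour target and all names here are assumptions for illustration.
OPTIMIZED_CREDIT_PER_HOUR = 15.0

def leveled_credit(granted_credit, cpu_hours):
    """Raise a grant to the optimized benchmark level, never lowering anyone."""
    target = OPTIMIZED_CREDIT_PER_HOUR * cpu_hours
    return max(granted_credit, target)

# A stock client that earned 60 credits over 6 CPU hours is raised to 90,
# matching what an optimized client would have claimed for the same time.
print(leveled_credit(60.0, 6.0))   # -> 90.0
print(leveled_credit(95.0, 6.0))   # an optimized host's 95 stays at 95.0
```

Applied retroactively, the same rule would implement the "raise everyone up" half of the proposal without reducing any existing score.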
Jose Joined: 28 Mar 06 Posts: 820 Credit: 48,297 RAC: 0
> I don't have time to read every message everywhere, but I haven't seen this recommended yet. Why not go with the planned system of measuring credit based on CPU time and performance, but instead of lowering scores, set the benchmarks equal to what the optimized clients do now?

At last a rational person. :)
Saenger Joined: 19 Sep 05 Posts: 271 Credit: 824,883 RAC: 0
> At last a rational person.

But that's just what I have said several times; suddenly, coming from a different sender, you agree. Strange.
Vester Joined: 2 Nov 05 Posts: 258 Credit: 3,651,260 RAC: 636
With four optimized computers, I found that a 300% benchmark improvement due to optimization gave only a 60% RAC increase. Reducing or increasing scores three-fold won't be fair to anyone, so adjustments need to be carefully considered.
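To make that arithmetic concrete, here is a small illustration using Vester's reported figures; the correction_factor helper and the host RAC numbers are hypothetical:

```python
# Illustrative only: Vester's numbers suggest benchmark ratios overstate real output.
benchmark_ratio = 3.0        # 300% benchmark improvement from optimization
observed_rac_ratio = 1.6     # but only a 60% RAC increase on the same hosts

# Scaling stock scores by the raw benchmark ratio would overshoot actual output:
print(f"overshoot if benchmarks are used directly: {benchmark_ratio / observed_rac_ratio:.2f}x")  # -> 1.88x

def correction_factor(stock_rac, optimized_rac):
    """Hypothetical per-host factor based on measured credit output, not benchmarks."""
    return optimized_rac / stock_rac

print(correction_factor(100.0, 160.0))   # -> 1.6, well below the 3.0 benchmark ratio
```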
[B^S] thierry@home Joined: 17 Sep 05 Posts: 182 Credit: 281,902 RAC: 0
> I don't have time to read every message everywhere, but I haven't seen this recommended yet. Why not go with the planned system of measuring credit based on CPU time and performance, but instead of lowering scores, set the benchmarks equal to what the optimized clients do now?

I don't really see what the difference is between leveling things to the top and leveling them to the bottom...
Jose Joined: 28 Mar 06 Posts: 820 Credit: 48,297 RAC: 0
> At last a rational person.

Saenger: you included the word "backtracking"; he did not. He is also not interested in cross-project equality. This is where your position and his differ:

> Edit - I've read that credits won't be backdated whatsoever, but this idea could still apply to future credits. Multi-project parity isn't as important as equality within a project.

There is a huge difference between what he is recommending and what you want.
Ethan Volunteer moderator Joined: 22 Aug 05 Posts: 286 Credit: 9,304,700 RAC: 0
The backdating idea was to apply the new credit system to past results. The idea of calculating credits based on what the optimized clients reported, then applying those values to standard clients, hasn't been talked about (that I've seen). It's a slippery slope, but I think comments on this idea would be useful.
[B^S] thierry@home Joined: 17 Sep 05 Posts: 182 Credit: 281,902 RAC: 0
Jose, explain please.
Jose Joined: 28 Mar 06 Posts: 820 Credit: 48,297 RAC: 0
> With four optimized computers, I found that a 300% benchmark improvement due to optimization gave only a 60% RAC increase. Reducing or increasing scores three-fold won't be fair to anyone, so adjustments need to be carefully considered.

The issue becomes developing a good, solid set of correction factors that is fair to all. It can be done.
carl.h Joined: 28 Dec 05 Posts: 555 Credit: 183,449 RAC: 0
I did WU 3565878 and so did Thierry. I got 90 credits using 5.5; he got 60. Give Thierry 30 more... that type of thing, Thierry, back to February. The problem here is that X BOINCers will not agree, or shouldn't!

Not all Czechs bounce but I'd like to try with Barbar ;-) Make no mistake: this IS the TEDDIES TEAM.
[B^S] thierry@home Joined: 17 Sep 05 Posts: 182 Credit: 281,902 RAC: 0
> I did WU 3565878 and so did Thierry. I got 90 credits using 5.5; he got 60. Give Thierry 30 more...

OK, but giving me 30 more or taking 30 from you is exactly the same, isn't it?
Jose Joined: 28 Mar 06 Posts: 820 Credit: 48,297 RAC: 0
> Jose, explain please.

Why don't we use this, from one of the threads that was deleted and that was receiving some fair comments, as a starting point... May I suggest the following procedure:

For every old protein
    For every procedure used on this protein (RelaxAll, IgnoreAll, whatever...)
        For every WU returned
            Calculate the Credit claimed Per Decoy generated (CPD)
        Next WU returned
        Sort the CPD values in descending order
        Ignore the top 3% of values
        Take the highest CPD value after that as fixed (CPDf)
        For every WU returned
            Calculate corrected credit = decoys * CPDf
            Adjust the awarded credit in the database to be the new calculated credit
        Next WU returned
    Next procedure
Next protein

Please note: the reason to ignore the top 3% of values (and no more, or it will affect employee A) is simple. It clears out (takes care of) the potential rare PC flukes, unit runs that were too lucky, users who "edited" the actual CPU time or benchmark, etc. By applying the above procedure to each protein/procedure pair independently, we remove the effect of the protein's properties (length, etc.) and the procedure's complexity from the equation. The end result is the equivalent of a quorum of thousands of WUs of the same protein/procedure. No participant will lose credit, and the few questionable cases will be automatically brought down into line (thus saving the huge ongoing task of identification/verification/clarification). The total credit will end up being consistent with the actual work done (the widgets) while the unit price (dollars per widget) is fixed. Equality, and hopefully peace, restored.
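A minimal sketch of that CPD normalisation in code, assuming each returned result record carries a protein, procedure, claimed credit, and decoy count; the field names, sample data, and the normalize_credits helper are illustrative, not Rosetta's actual database schema:

```python
# Hypothetical sketch of the per-protein/procedure CPD normalisation described above.
from collections import defaultdict

def normalize_credits(results, trim_fraction=0.03):
    """Return {result_id: corrected_credit} using a fixed credit-per-decoy (CPDf)
    per protein/procedure pair, after ignoring the top trim_fraction of CPD values."""
    groups = defaultdict(list)
    for r in results:
        if r["decoys"] > 0:                      # skip results that produced no decoys
            groups[(r["protein"], r["procedure"])].append(r)

    corrected = {}
    for group in groups.values():
        # Credit claimed Per Decoy for every result, highest first
        cpds = sorted((r["claimed_credit"] / r["decoys"] for r in group), reverse=True)
        skip = int(len(cpds) * trim_fraction)    # how many top values to ignore (~3%)
        cpdf = cpds[skip]                        # highest CPD after the trimmed flukes
        for r in group:
            corrected[r["id"]] = r["decoys"] * cpdf   # corrected credit = decoys * CPDf
    return corrected

# Example: two results for the same (hypothetical) protein/procedure pair
sample = [
    {"id": 1, "protein": "prot_A", "procedure": "RelaxAll", "claimed_credit": 90.0, "decoys": 3},
    {"id": 2, "protein": "prot_A", "procedure": "RelaxAll", "claimed_credit": 60.0, "decoys": 3},
]
print(normalize_credits(sample))   # -> {1: 90.0, 2: 90.0}; both paid at 30 credits per decoy
```

Every result in a protein/procedure group ends up paid at the same credit per decoy; only claims above CPDf (the trimmed top) are brought down, matching the "questionable cases adjusted downwards" behaviour described above.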
Jose Joined: 28 Mar 06 Posts: 820 Credit: 48,297 RAC: 0
> I did WU 3565878 and so did Thierry. I got 90 credits using 5.5; he got 60. Give Thierry 30 more...

It is not that simple, Carl. :)
Saenger Joined: 19 Sep 05 Posts: 271 Credit: 824,883 RAC: 0
> Saenger: you included the word "backtracking"; he did not. He is also not interested in cross-project equality.

So did he in most of his post; only in the edit did he say: "OK, if not, then at least for the future." That's what I have said several times. I can live without real stats for the past; it would only be nicer to have them. And the values are a decision the project has to make, but one that will have an effect on the participants. If it doesn't fit into the BOINC scheme, I think quite a lot of users will be lost. I don't know how hard it would be for Willy, Zain, Neil et al. to put a correction factor in place for their stats pages if the values don't fit, or whether they would simply have to ditch Rosetta from the stats because of incompatibility.
[B^S] thierry@home Joined: 17 Sep 05 Posts: 182 Credit: 281,902 RAC: 0
> Jose, explain please.

Jose, I asked you about Tallbill's proposal. You never answer a direct question, do you?
Jose Joined: 28 Mar 06 Posts: 820 Credit: 48,297 RAC: 0
> I did WU 3565878 and so did Thierry. I got 90 credits using 5.5; he got 60. Give Thierry 30 more...

Thierry, look at what I am proposing. Focus on that.
[B^S] thierry@home Joined: 17 Sep 05 Posts: 182 Credit: 281,902 RAC: 0
> I did WU 3565878 and so did Thierry. I got 90 credits using 5.5; he got 60. Give Thierry 30 more...

You always answer with something else. I give up for today (if I can).
Jose Joined: 28 Mar 06 Posts: 820 Credit: 48,297 RAC: 0
> Jose, explain please.

I am explaining why I think the proposal can be workable. But if you and your friends only came to this thread to continue banging heads and pick a fight with me, then remember that it takes two to fight. I will ignore you as a person who only wants to fight and not find a solution to the issues. But, as a warning, I will react to your attempts to flame here with the same passion that I have. The time has come to look for solutions. Some reasonable, working, and fair compromises have to be reached. If you want to stay on your purist high horse, you are welcome to, but then don't complain when the only reaction you get is scorn.
Saenger Joined: 19 Sep 05 Posts: 271 Credit: 824,883 RAC: 0
> Thierry, look at what I am proposing.

It's about retroactive levelling of the playing field. That's fine, and it's just what I always wanted. The values are, at first (at least from a project-internal point of view), irrelevant. You can normalise the credits per decoy to whatever you want, as long as it's consistent. In terms of project-internal effects, I don't see any difference between normalising them to the bottom, to the top, or even to twice the top value. The only difference the value makes is compatibility with BOINC.
carl.h Joined: 28 Dec 05 Posts: 555 Credit: 183,449 RAC: 0
Hold up, let me get a picture! Saenger and Jose agree on a possible way to backdate the credit system! ;-)

Not all Czechs bounce but I'd like to try with Barbar ;-) Make no mistake: this IS the TEDDIES TEAM.