Message boards : Number crunching : Jack's Pseudo-Redundancy Proposal
Paul D. Buck (Joined: 17 Sep 05, Posts: 815, Credit: 1,812,737, RAC: 0)
Jack suggested the use of what he called Pseudo-Redundancy in another thread:

"If we had unlimited development time, I think the best system would be one along the lines of Hermann's pseudo-redundancy. One could take the median time of the Work Units that differ only by random-number seed. I believe the Work Units are structured such that thousands that fit this requirement are sent out at a time. This would mean that very reliable statistics could be generated about the average CPU requirements of a Work Unit, which could then be used to assign credit. It would be far less noisy than 2-fold or 4-fold redundancy. Because I don't have a true hacker's mentality, I'll need help understanding the cheating loopholes in this system."

It seems to me that this would not be that difficult to implement within the current BOINC framework. The downside would be that all the Results would be part of a very large Work Unit.

An alternative (forgive me; as a database person I tend to think in database terms) would be to simply continue to issue work as it is issued now and to calculate the grant based upon the average claim for the same protein.

Even simpler, and close to an existing system: forget being fancy, just run a query now, get the average CS/sec number, inflate it by some margin to be generous, and use a system similar to CPDN, where you get a "flat-rate" grant based on CPU seconds.

The floor is open ...

==== edit: added links
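The two schemes above can be sketched in a few lines. This is only an illustrative sketch, not actual BOINC server code: `grant_credit` pools the claimed credits of results that differ only by random seed and grants every result the pool median (inflated by a generosity margin), while `flat_rate_grant` shows the CPDN-style flat rate per CPU second. The function names, the 10% margin, and the example numbers are all made up for illustration.

```python
# Sketch of the pseudo-redundancy credit idea (hypothetical, not BOINC API).
from statistics import median

def grant_credit(claimed_credits, margin=1.1):
    """Grant every result in the pool the median claim times a margin.

    `claimed_credits` holds the claims of results that differ only by
    random-number seed, so they should all have cost about the same CPU
    time; the median ignores outliers (inflated or broken claims).
    """
    if not claimed_credits:
        return []
    grant = median(claimed_credits) * margin
    return [round(grant, 2)] * len(claimed_credits)

def flat_rate_grant(cpu_seconds, cs_per_sec, margin=1.1):
    """CPDN-style flat rate: CPU seconds times a fixed credit rate."""
    return round(cpu_seconds * cs_per_sec * margin, 2)

# Five results for the same protein; one wildly inflated claim.
claims = [40.0, 42.5, 41.0, 120.0, 39.5]
print(grant_credit(claims))          # every result gets the same grant
print(flat_rate_grant(3600, 0.01))   # one CPU hour at 0.01 CS/sec
```

Note how the median makes the 120.0 claim irrelevant: a cheater inflating one claim moves the grant not at all, which is the "far less noisy" property Jack describes.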
John McLeod VII (Joined: 17 Sep 05, Posts: 108, Credit: 195,137, RAC: 0)
Major loophole: overclock until the results are worthless. You get 20% more work "done", but the results would be complete trash, and it would go undetected. BOINC WIKI
©2024 University of Washington
https://www.bakerlab.org