Message boards : Number crunching : The cheating thread
Moderator9 (Volunteer moderator) · Joined: 22 Jan 06 · Posts: 1014 · Credit: 0 · RAC: 0
> So they're doing the equivalent of running a test decoy/model from every WU on a machine in the lab, and determining an official number of points for each WU's decoys/models?

It is my understanding that is not how it works. My GUESS would be that it is more likely a lookup table for different machine types and speeds, with some kind of correction factor. But that is only a guess. Perhaps Tony will chime in here with the actual answer. Whatever they are doing, it is accurate to within 0.1 credits across very different machine speeds.

Moderator9
ROSETTA@home FAQ · Moderator Contact
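Purely to illustrate the guess above (nothing here is confirmed project code, and every CPU name and number is invented), such a lookup table with a per-host correction factor might look like this:

```c
#include <stdio.h>
#include <string.h>

/* Invented baseline table: credit per CPU-hour for a few CPU types.
   All names and numbers are made up for illustration. */
struct cpu_entry {
    const char *cpu_type;
    double credit_per_hour;
};

static const struct cpu_entry table[] = {
    { "Intel Pentium 4 3.0GHz", 8.0 },
    { "AMD Athlon 64 3200+",   10.5 },
    { "Intel Core Duo T2400",  12.0 },
};

/* The guessed scheme: look up the host's CPU type, then scale the
   baseline by a per-host correction factor. */
double granted_credit(const char *cpu_type, double hours, double correction)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].cpu_type, cpu_type) == 0)
            return table[i].credit_per_hour * hours * correction;
    return 0.0;  /* CPU type not in the table */
}

int main(void)
{
    printf("%.1f credits\n", granted_credit("AMD Athlon 64 3200+", 6.0, 1.02));
    return 0;
}
```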
Astro · Joined: 2 Oct 05 · Posts: 987 · Credit: 500,253 · RAC: 0
Best I can figure, the formula is (fpops_cumulative/1e9)*(100/86400). The science app contains a multiplier, which they're playing around with now. The boinc_fpops_cumulative() API reports claimed credit to the project server from the host. An open-source science app could manipulate the "multiplier" to cheat (unless I'm reading this wrong, and I very well could be).

The wiki says:

Boinc fpops cumulative
The title of this article is incorrect due to technical limitations. The correct title is boinc_fpops_cumulative().

General: This API call allows the science application to pass a total number of FLOPS to the BOINC client software, and from there to the BOINC server software, for use in determining claimed credit.

Version information: This call first appeared in version 4.46, but was buggy. The bug was fixed in version 5.2.6.

Developer information:
Code location: boinc/api/boinc_api.C,h
Function: boinc_fpops_cumulative()
Documentation: boinc/api/boinc_api.C
CVS extract: cvs -d :pserver:anonymous:@alien.ssl.berkeley.edu:/home/cvs/cvsroot checkout boinc/api/boinc_api.C

Retrieved from "http://boinc-wiki.ath.cx/index.php?title=Boinc_fpops_cumulative"

See this post; quite frankly, the whole thread makes for a good read as to how they're developing this.

tony
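To make Tony's worked-out formula concrete, here is a minimal standalone sketch; it sanity-checks that a host sustaining 1 GFLOP/s for a day claims 100 credits, and the cheat multiplier value is invented purely for illustration:

```c
#include <stdio.h>

/* Tony's formula: claimed = (fpops_cumulative / 1e9) * (100 / 86400).
   A host sustaining 1 GFLOP/s for a full day (86400 s) claims 100 credits. */
double claimed_credit(double fpops_cumulative)
{
    return (fpops_cumulative / 1e9) * (100.0 / 86400.0);
}

int main(void)
{
    double fpops = 1e9 * 86400.0;  /* one day at a sustained 1 GFLOP/s */
    printf("claimed credit: %.1f\n", claimed_credit(fpops));  /* 100.0 */

    /* The cheating concern raised above: an altered open-source app could
       scale the total it passes to boinc_fpops_cumulative() before reporting. */
    double cheat_multiplier = 10.0;  /* invented value */
    printf("inflated claim: %.1f\n", claimed_credit(fpops * cheat_multiplier));
    return 0;
}
```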
mikus · Joined: 7 Nov 05 · Posts: 58 · Credit: 700,115 · RAC: 0
My "GUESS" would be that it is more likely a look up table for different machine types and speeds with some kind of correction factor. But that is only a Guess. That approach might not cover oddball cases. The computer I use for Rosetta is "vanilla" (no clockings have been modified), but for another project I use a "strawberry" computer that has been under-clocked! Its BIOS reports the nearest "CPU type" it knows about, but the actual performance of that system is *better* than for a system with an actual CPU chip of that "type". . |
XS_Vietnam_Soldiers · Joined: 11 Jan 06 · Posts: 240 · Credit: 2,880,653 · RAC: 0
I think sometimes I see things too black and white. To me it seems a simple thing: WU A has 600 "points" that need to be measured, for lack of a better word; it gets 60 credits. WU B has 500 points that need to be measured; it gets 50 credits. You get the idea. I imagine it is a bit more complicated than that, but I am sure the Baker people could assign a value to each WU without too much difficulty. If I'm wrong on that, please advise me. That takes care of the different amounts of work that different WUs require, and the machine's speed (time) in doing them sorts out the rest. Simple? Am I missing something? Kills the cheating?

I might sound like a broken record, but I fail to see why people want to make something that is simple so complicated. You have a fast machine; it does more work units in a day than a slower machine; it gets more points. No benchmarks, no need for charts for different CPUs, no need for anything. The complexity of the work unit and the speed of the machine decide all the issues.
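A minimal sketch of the scheme being proposed, under the assumption (not anything the project has confirmed) that each WU carries a pre-assigned point value; the divisor of 10 simply mirrors the 600-to-60 example in the post:

```c
#include <stdio.h>

/* Proposed scheme: granted credit is fixed per work unit, derived from a
   point value the project assigns up front, independent of the host's
   benchmarks or runtime. */
double granted_credit(double wu_points)
{
    return wu_points / 10.0;
}

int main(void)
{
    printf("WU A: %.0f credits\n", granted_credit(600.0));  /* 60 */
    printf("WU B: %.0f credits\n", granted_credit(500.0));  /* 50 */
    return 0;
}
```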
Astro · Joined: 2 Oct 05 · Posts: 987 · Credit: 500,253 · RAC: 0
> I think sometimes I see things too black and white.

I agree. However, rather than spending their own money designing and implementing software specifically for the needs of Rosetta, they chose BOINC. There are a few good reasons for this: one, it's completely free; two, you immediately get access to a huge pool of volunteers just itching to give away their CPU cycles to a worthy cause; and three, existing BOINC users, FAH users, and other DC project users aren't as likely to add a whole separate program to their machines that makes them balance "priority" settings and such on a constant basis, not to mention cutting production rates by trying to run two memory-eating apps at the same time.

So, since they chose BOINC, they're stuck with what BOINC offers (which is constantly changing to meet the needs of all projects/users). LHC has varied WUs: some run a "predictable" amount of time, some smash into the wall and finish early. LHC couldn't possibly fairly assess the value of a given result before it is actually crunched. SETI also has short/long WUs, WUs of varied angle ranges, and just plain noisy (EMI-contaminated) WUs that finish in seconds. Other projects have their own unique concerns. So BOINC settles for what "will work" for all projects, even though it's not the best for any particular project. Do you see my point?

tony
Moderator9 (Volunteer moderator) · Joined: 22 Jan 06 · Posts: 1014 · Credit: 0 · RAC: 0
> I think sometimes I see things too black and white.

The plan fails when the work units are different sizes, unless the system can be made to distribute the same mix of large and small work units to every system. Under your plan these would have different score values, and we already know they take different times to process. Moreover, with the new flexible user time setting it just won't work. The answer is the new scoring system at SETI Beta. It will get here eventually. Until then I would expect nothing to change.

Moderator9
ROSETTA@home FAQ · Moderator Contact
FluffyChicken · Joined: 1 Nov 05 · Posts: 1260 · Credit: 369,635 · RAC: 0
> I think sometimes I see things too black and white.

The basic flaw in that design is that it assumes each 'point' is of a fixed time length. That is not always the case. With SETI Beta they have found that fpops counting works in general, but at each angle extreme it breaks down (this is where fpops doesn't work); i.e., it takes longer, or is very quick, to do a calculation before the app can get back to saying 'done an fpop'. This is what I believe they're 'fudging' at the moment. It means they are having to apply a linear (for them, at the moment) fudge factor based on the angle range. Angle is just one of their data parameters, and it is also what caused the problem in the original SETI (but there it caused tasks to run quicker and slower, hence the move to a 'work done' system).

Anyway, will it work over here? It needs to be tested, and to me Rosetta is a good project for BOINC to test it on ;-)

By the way, the reason for the 'linear fudge factor' in SETI Enhanced (beta) is not to stop cheating, but to stop people preferentially taking the quick jobs (since they give the same credit in a shorter space of time, although they actually do the same number of calculations per task). Otherwise you'd have a load of people dumping jobs to grab the quicker ones and accrue credit faster, and the project (SETI) would be left reissuing, or even with unfinished, tasks/jobs. Well, so I understand.

Team mauisun.org
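As a sketch of the kind of angle-based linear correction described above (the thresholds and coefficients here are invented for illustration, not SETI's actual values):

```c
#include <stdio.h>

/* Hypothetical linear fudge factor keyed on a task's angle range: near
   the extremes the counted fpops misstate the real work, so the
   fpops-based credit claim is scaled. All numbers are invented. */
double fudge_factor(double angle_range)
{
    if (angle_range < 0.05)   /* low-angle extreme: counting breaks down */
        return 1.0 + 4.0 * (0.05 - angle_range);
    if (angle_range > 1.0)    /* high-angle extreme */
        return 1.0 + 0.2 * (angle_range - 1.0);
    return 1.0;               /* mid-range: the fpops count stands as-is */
}

double corrected_credit(double fpops_credit, double angle_range)
{
    return fpops_credit * fudge_factor(angle_range);
}

int main(void)
{
    printf("mid-range angle: %.2f credits\n", corrected_credit(30.0, 0.40));
    printf("extreme angle:   %.2f credits\n", corrected_credit(30.0, 0.01));
    return 0;
}
```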
anders n · Joined: 19 Sep 05 · Posts: 403 · Credit: 537,991 · RAC: 0
SETI Enhanced is up for release.

Anders n
TioSuper · Joined: 2 May 06 · Posts: 17 · Credit: 164 · RAC: 0
From my vantage point of "minor inter pares" (a lesser among equals), I don't see why spending time and resources on the "rewards", a.k.a. credit tweaking, is worth the money and effort. I just noticed that the recent spike in WUs produced can be traced in large part to a duel between two teams who, over the last 28 days, were responsible for approximately 37% of the production share. If you add a third team (which seems to have joined the shoot-out), we are talking about 50% of the total Rosetta production in the last 28 days. It seems to me that the current credit structure is motivating them to place more computing resources into Rosetta. Call me naive, but if competing for points is motivating those three behemoths to produce more for Rosetta, then the project wins. So let the fight for credit continue. From what I have been able to ascertain, the three teams in question are enjoying the shoot-out, with subtle baiting of each other and a realization that, in the end, what they produce benefits more than their bragging rights: Rosetta, the project (the science), wins.

PS: I wonder how many of the participants even notice the credits and RAC they have accumulated, let alone care how they are calculated?
dag · Joined: 16 Dec 05 · Posts: 106 · Credit: 1,000,020 · RAC: 0
Take it from an analyst: they know.

dag
--Finding aliens is cool, but understanding the structure of proteins is useful.