Message boards : Number crunching : The cheating thread
Author | Message |
---|---|
STE\/E Send message Joined: 17 Sep 05 Posts: 125 Credit: 4,100,936 RAC: 131 |
Live and let live, Stephan. No, what I'm saying is: either present Irrefutable Proof that somebody is Cheating, or just give it a rest ... |
Aurora Borealis Send message Joined: 7 Oct 05 Posts: 15 Credit: 352,300 RAC: 0 |
This thread is very interesting and basically a rehash of similar threads on Seti-Classic several years ago. As interesting an intellectual debate as it is, it won't prevent the cheating from occurring. I can remember the first copy-protection system on diskettes for the Commodore 64. It got cracked within weeks. Sony's latest and greatest copyright protection can be overcome with a little bit of scotch tape. There is unfortunately little that can be done against an intelligent, resourceful cheat. Questions? Answers are in the BOINC Wiki. Boinc V6.12.41 Win 7 i5 GPU Nvidia 470 |
Biggles Send message Joined: 22 Sep 05 Posts: 49 Credit: 102,114 RAC: 0 |
Just a few thoughts. I'm not going to participate in a project rife with cheating. I know there will always be some on all projects, but there is a difference between a little and a lot. Many people feel the same way, especially on the stats-driven teams. What about ignoring the BOINC benchmark for credit purposes and having a benchmark within Rosetta itself? That way it wouldn't be open to tampering from modified BOINC clients. Optimised clients are a good thing if they genuinely speed up crunching and cut the crunching times. It wouldn't be fair to do things twice as fast as everyone else and only get half the credit for it. So flop counting would make things far fairer, and would make the use of optimised clients a good thing. The guy who is mentioned further up the thread, with the huge RAC from a Pentium 4, is anonymous. It was pointed out he could have legit production from a whole bunch of machines and just have merged them. We of course can't tell without being able to view his computers. What about turning off merging? I know it could cause things to be messy if we re-install clients etc. But we could lump them together under an "inactive installs" heading and just hide them. That way we could tell whether production in the case above was legit or not. |
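The flop-counting idea above can be sketched as follows — a minimal illustration assuming BOINC's published cobblestone scale (200 credits per day of sustained 1 GFLOPS); the function name is hypothetical:

```python
SECONDS_PER_DAY = 86_400
CREDITS_PER_GFLOPS_DAY = 200  # BOINC's cobblestone definition

def flop_credit(fpops_counted: float) -> float:
    """Credit from an application-counted FLOP total.

    Because the count comes from the science app itself, a modified
    BOINC client cannot inflate it the way it can inflate benchmark
    scores.
    """
    gflops_days = fpops_counted / (1e9 * SECONDS_PER_DAY)
    return gflops_days * CREDITS_PER_GFLOPS_DAY

# One full day of sustained 1 GFLOPS = 8.64e13 operations = 200 credits,
# regardless of what the host's benchmarks claim.
print(flop_credit(8.64e13))  # → 200.0
```

The same work earns the same credit on every host, so optimised clients gain only by finishing more work.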
anders n Send message Joined: 19 Sep 05 Posts: 403 Credit: 537,991 RAC: 0 |
Here is a thread that has some thoughts on fixing the problem. http://setiathome.berkeley.edu/forum_thread.php?id=21906 Anders n |
j2satx Send message Joined: 17 Sep 05 Posts: 97 Credit: 3,670,592 RAC: 0 |
I'm not saying a sufficiently motivated person couldn't find ways around some of these - but it would hopefully get rid of the most blatant attempts and, by having a declared punishment system, you have a deterrent. I would suggest just submitting the users / hosts that you think the project admins ought to verify. I absolutely do not think you should label them as cheaters. If they are, let the project take care of the issue. |
nasher Send message Joined: 5 Nov 05 Posts: 98 Credit: 618,288 RAC: 0 |
well stephan i agree that there are definitely some cases where it looks like people are cheating... my recommendation is to take the people you suspect of cheating and maybe, in e-mails with the admin here, present your question of whether they are cheating... the admin here i'm sure has more access to the stats and can tell you if 25 people decided to become the #6 stat or such, or if someone turned in 1 job worth 43210 credits. posting their names here probably won't work, since i doubt many people who are cheating blatantly care about what we think in the message boards. no matter what you do there will always be some cheating... heck, a few years ago people said it was cheating if you supercooled your computer and turned up the processing speed... now a lot of people do that normally. heck, on 1 project people said others were cheating because they had 2 or more computers running under the same name. wish i knew a way to limit the amount of cheating, but i just do not have enough knowledge about how people are cheating to be able to make suggestions |
stephan_t Send message Joined: 20 Oct 05 Posts: 129 Credit: 35,464 RAC: 0 |
:-) :-) :-) BTW cheers guys I had a bit of well-needed fresh air - feeling a lot less miffed now. I'll follow your advice Nasher. :-D Team CFVault.com http://www.cfvault.com |
FluffyChicken Send message Joined: 1 Nov 05 Posts: 1260 Credit: 369,635 RAC: 0 |
Why not take the benchmark into the apps themselves? Then it would take all the problems away from the BOINC manager. Optimised clients could then have the benchmarks internally (and a fudge factor could be added to even out platforms for the same work done). OK, for some projects that are open source you will need other protections. This also means you can use a benchmark relevant to the type of computing you are doing. Why benchmark instructions you're not even using? Also, increase the frequency of the benchmark so it gives a better representation of 'now'. It doesn't need to be a long and complex benchmark either, as it'll be relevant to the crunching. Anyway, this also helps in other places. 1) Modern computers have types of throttling. So a) you benchmark at full speed - the computer gets hot, it throttles, and your computer is now doing less work, but you're still earning at the higher work rate. b) Cool'n'Quiet - again you benchmark at the higher rate, but it sees the benchmark as an idle process so it drops to its lowest speed. A similar thing happens with laptops, or just plain people running the benchmark at an overclocked speed and then dropping the multiplier - but for 4 days it's quite happy giving you the higher work rate. While I wouldn't class C&Q or throttling as cheating, you're not reporting actual work done but 'potential' work done. Maybe the developers should have a chat with the Find-a-Drug people; their Time x CPURating worked quite well and used a very short benchmark every 10 mins or so, then used some damped type of average as it went along. Of course, as soon as you open up that part, anyone can write whatever they want into the results they send in. Hence for closed-source apps moving it into the program may work. (Of course remembering to balance the Work Done (cobblestones, still?) with other projects ;) ) Team mauisun.org |
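The Find-a-Drug-style damped average mentioned above might look like this — a sketch assuming an exponential moving average over short in-app benchmark samples; the class name, cadence, and smoothing factor are illustrative assumptions, not real project parameters:

```python
class DampedBenchmark:
    """Exponential moving average of frequent, short in-app benchmark
    samples (e.g. one every ~10 minutes), so the credited rate tracks
    the speed the host is actually running at."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha  # weight given to the newest sample
        self.rate = None    # smoothed speed estimate (e.g. MFLOPS)

    def update(self, sample: float) -> float:
        if self.rate is None:
            self.rate = sample
        else:
            self.rate = self.alpha * sample + (1 - self.alpha) * self.rate
        return self.rate

bench = DampedBenchmark(alpha=0.2)
for sample in (1000, 1000, 400, 400):  # host throttles mid-run
    current = bench.update(sample)
# The smoothed rate (784 after these samples) drifts toward the
# throttled speed instead of crediting the initial full-speed score.
```

A one-off benchmark taken every 4 days would keep paying out at the pre-throttle rate; the damped average decays toward actual speed within a few samples.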
Scott Brown Send message Joined: 19 Sep 05 Posts: 19 Credit: 8,739 RAC: 0 |
Well, just a quick point... cheating is, by definition, doing something that is expressly forbidden (e.g., copying someone else's test answers, using someone else's writing as your own, etc.). Since there is no rule against using an optimized BOINC client, it cannot be called cheating. @Stephen Be very careful what you choose to post publicly regarding any specific claims of cheating. Those individuals, given the 'public' nature of such a posting, could take legal recourse against both you and the project (if it allowed the posting). |
nasher Send message Joined: 5 Nov 05 Posts: 98 Credit: 618,288 RAC: 0 |
Well, everyone has their own idea of cheating. i don't necessarily consider an OPTIMISED client as cheating unless it is set up to give you BOOSTED SCORES (ie 2x or greater what you should have gotten). if the optimised client actually makes the work go faster, or levels your points to what you should be getting, i wouldn't call it cheating myself. i am running standard clients, since i really haven't spent the time looking at the optimised ones to determine whether they are actually going to help the projects or just my score. if i really want my score to go up i can always add another computer or 2 or 120 or such and get my score way up there. as far as legal ramifications of calling a person a cheater... um, as far as i know it won't hurt your social or financial life, so i don't think anyone could be blamed for defamation of character. not to mention boinc and rosetta are worldwide programs, so um... i don't know what international law would say about it. |
stephan_t Send message Joined: 20 Oct 05 Posts: 129 Credit: 35,464 RAC: 0 |
Since there is no rule against using an optimized BOINC client, it cannot be called cheating. Funny. First, I never said anything against optimized clients. Read my comments again - if someone uses optimized clients to 'even the playing field' (win32 vs *nix), good for them. But for some reason someone decided this was a thread about standard vs optimized clients. It isn't, because you can cheat with the regular client, too. Bottom line is that credit = benchmark * CPU time. Meaning that someone who fakes his/her benchmarks gets much higher credits without actually helping the project. The WUs ALWAYS take the same amount of time - optimized client or not (read this sentence again). @Stephen First of all, that's nonsense. Second, Poorboy was right: I tried to denounce cheating and now I'm 'made to be the bad guy'. Third, I'm getting tired of those patronizing, ominous comments about what I can or cannot say on these boards. Fourth, let's lock/delete/ignore this thread - I'd much rather do something productive and crunch than have to be on the defensive constantly. The owners of the project have probably read this thread and are very much aware of the problem - I trust they'll find a good solution for it in time; they have been stellar so far. Finally, this project is great - so newbies, please do join, and don't let this nonsensical thread discourage you :-D Did you see the new screensaver? It's awesome. Team CFVault.com http://www.cfvault.com |
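The credit formula described above can be illustrated with a rough proportional sketch — the scaling divisor is arbitrary, not BOINC's exact formula — to show why a faked benchmark inflates the claim without any extra work being done:

```python
def claimed_credit(benchmark_mflops: float, cpu_seconds: float) -> float:
    # Proportional sketch only: credit scales with benchmark x CPU time.
    # The divisor is an arbitrary illustrative scale, not BOINC's formula.
    return benchmark_mflops * cpu_seconds / 1e6

# Same work unit, same 10 hours of CPU time on the same hardware:
honest = claimed_credit(1_000, 36_000)   # real benchmark score
faked = claimed_credit(10_000, 36_000)   # benchmark faked 10x higher
print(faked / honest)  # → 10.0
```

Because the WU takes the same wall-clock time either way, the only thing the faked benchmark changes is the claim — ten times the credit for identical scientific output.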
Scott Brown Send message Joined: 19 Sep 05 Posts: 19 Credit: 8,739 RAC: 0 |
Since there is no rule against using an optimized BOINC client, it cannot be called cheating. Funny, that part of my comment was clearly directed at the thread rather than you directly (unlike my second comment). @Stephen Sorry, I wasn't clear here. I did not mean a standard forum posting, but rather was commenting on your proposed "list" creation. Such a public listing would be subject to more serious legal scrutiny. More importantly, my comment was merely a warning as I did not want to see you or the project in hot water. It sure as hell wasn't (and was not intended to be) patronizing!
Done. |
XS_Vietnam_Soldiers Send message Joined: 11 Jan 06 Posts: 240 Credit: 2,880,653 RAC: 0 |
These are just a couple that caught my eye. I know Xeon processors very well and what they will and won't do. Take a look at this one - with his bench they could take this machine and replace NASA's SGI setup with it! https://boinc.bakerlab.org/rosetta/show_host_detail.php?hostid=210800 Another; there is no way on god's green earth that a machine with these specs will post these benchmarks: https://boinc.bakerlab.org/rosetta/show_host_detail.php?hostid=192676 An AMD Opteron or X2 will, but not an Intel Xeon system. This is not something I think, this is something I know. I realise that sounds awfully arrogant, but I work with these and build them for a living, and no P4-Xeon based system running on Windows will post these benchmarks - not even watercooled and OC'd to death. And another: this is a P4 with HT. NOT even close to possible! https://boinc.bakerlab.org/rosetta/show_host_detail.php?hostid=180767 And another: https://boinc.bakerlab.org/rosetta/show_host_detail.php?hostid=174429 These guys are so blatant! Do they need the points that badly? Are their egos so fragile that they aren't satisfied with the correct points? Ok, I'm smiling, end of rant! <BG> I was just reading a bit more in this thread about what is and is not cheating. I use Intel-based machines that take a heck of a hit for points as compared to the AMD systems. That doesn't bother me... much. IF Rosetta wants to kill the cheating totally, they can do this: assign a value to a WU - an arbitrary number, but relative to the other work units that are sent out, based on some difficulty factor; for example, 50 points. Pete's quad dual-core AMD Opteron 885 system crunches the unit in 10 minutes. Mike's dual dual-core AMD Opteron 285 system crunches the unit in 30 minutes. Mary's dual P4 Xeon 3600 system crunches the unit in 40 minutes. Alice's Opty 165 crunches the unit in 40 minutes. Betty's P4-3000 Dell crunches the unit in 2 hours. Harry's P4-2000 crunches the unit in 3 hours. EVERYONE gets 50 points for that unit.
TIME is what makes the difference in your scores. The machines themselves take care of all the cheating issues. Benchmarks are not used at all. They are not needed. You get your points strictly for the work that you accomplish. This doesn't cover any bad work units - that's another topic, and from what I have seen the Baker people have done an outstanding job working on those issues. There may be reasons beyond my knowledge why such a system can't be used, but if it could be, think of all the man-hours that could be saved. just my thoughts folks... Feel free to criticise. <BG> Dave |
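The fixed-value scheme above can be sketched directly from the example hosts — everyone claims the same 50 points per unit, and throughput alone determines credits per hour:

```python
WU_CREDIT = 50.0  # the fixed, difficulty-relative value from the example

# Minutes per work unit for the hosts in the example above.
minutes_per_wu = {"Pete": 10, "Mike": 30, "Mary": 40,
                  "Alice": 40, "Betty": 120, "Harry": 180}

# Every host earns exactly WU_CREDIT per unit; faster machines simply
# complete more units, so benchmarks never enter the calculation.
for name, minutes in minutes_per_wu.items():
    print(f"{name}: {WU_CREDIT * 60 / minutes:.1f} credits/hour")
```

Pete's quad Opteron earns 300 credits/hour and Harry's P4-2000 earns about 16.7, purely from completed work — there is no benchmark left to fake.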
Tribaal Send message Joined: 6 Feb 06 Posts: 80 Credit: 2,754,607 RAC: 0 |
The original Seti@home used that form of credit system, counting WUs completed rather than using the current form. I can't remember off the top of my head why they dropped it, but I remember having the feeling it was justified and well thought out. Could it be that these machines are master nodes of clusters? This would explain some of it... BOINC probably would state the master's specs instead of the whole network's, but the turnaround time for a WU would be really high... Just a thought. Keep crunchin' ;) - trib' |
Moderator9 Volunteer moderator Send message Joined: 22 Jan 06 Posts: 1014 Credit: 0 RAC: 0 |
The original Seti@home used that form of credit system, counting WUs completed rather than using the current form. In part to stop cheating, in part politics. A number of people found ways to turn in work units in very short time periods that they had not actually processed, and that would still validate. It is interesting to note that the new credit system being tested on SETI Beta provides the same credit value for a work unit no matter how fast the system did the work. So it seems we will be coming full circle very soon. Moderator9 ROSETTA@home FAQ Moderator Contact |
AMD_is_logical Send message Joined: 20 Dec 05 Posts: 299 Credit: 31,460,681 RAC: 0 |
These are just a couple that caught my eye. I know Xeon processors very well and what they will and won't do. Take a look at this one - with his bench they could take this machine and replace NASA's SGI setup with it! I checked some of the WUs from that machine and the number of models they generated, and I would say it's getting about 8 times the credit that an Athlon XP running the recommended Windows client would get for the same work. Another; there is no way on god's green earth that a machine with these specs will post these benchmarks: Actually there is. The machine's real-time clock could be running really, really slow. :) IF Rosetta wants to kill the cheating totally, they can do this: This could work. The length of time it takes to do a model is, on average, quite consistent for a given type of WU. Ralph could be used to determine the per-WU credit for new WU types. I like the idea of credit being linked to the amount of work done rather than to some arbitrary benchmark. |
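The Ralph calibration idea might be sketched like this — a hypothetical scheme where the average model time measured on the test project fixes the per-model credit for a WU type; the reference rate of 10 credits/hour is an assumed constant, not a real project value:

```python
from statistics import mean

def per_model_credit(ralph_model_seconds: list[float],
                     reference_credits_per_hour: float = 10.0) -> float:
    """Derive a fixed per-model credit for a WU type from timing runs
    on a test project (Ralph, in the suggestion above). Every host then
    claims this amount per model, whatever its benchmarks say."""
    avg_hours = mean(ralph_model_seconds) / 3600
    return avg_hours * reference_credits_per_hour

# Three calibration hosts averaged 30 minutes per model on Ralph:
credit = per_model_credit([1800.0, 1980.0, 1620.0])
print(round(credit, 2))  # → 5.0
```

Since model times are consistent per WU type, a host claiming 8x the calibrated rate (as in the case above) would stand out immediately.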
XS_Vietnam_Soldiers Send message Joined: 11 Jan 06 Posts: 240 Credit: 2,880,653 RAC: 0 |
The original Seti@home used that form of credit system, counting WUs completed rather than using the current form. That's essentially what I am talking about. Give a credit value to the work unit and let the time frame that it takes to work that unit sort out the scoring. Simple and easy. No more cheating... hopefully! <BG> |
FluffyChicken Send message Joined: 1 Nov 05 Posts: 1260 Credit: 369,635 RAC: 0 |
The original Seti@home used that form of credit system, counting WUs completed rather than using the current form. Sounds simple, though it's taken time to implement, and hopefully Ralph (when Rom's back) may start to test it, purely since Rom should be here and they really need something to compare to seti-enhanced. Mod9, I forget, but do you know where they put the fudge factor in to adjust the fpops score between architectures and other projects (currently set at 7 at seti-beta)? If it's server side, all's good, but I think it is in the science app. It's only really a problem on projects with no redundancy that are open source, as it would be harder to spot if someone has fiddled that number. Benchmark*Time is supposed to represent the work done, but since boinc and hence the benchmark are open source, and the benchmark is only run every 4 days by default, it's largely flawed ;-) If you couldn't alter it and it was taken regularly, say ~every 10 mins, but was actually a brief real science computation rather than a rather irrelevant Whetstone/Dhrystone benchmark, it may have worked. Team mauisun.org |
Moderator9 Volunteer moderator Send message Joined: 22 Jan 06 Posts: 1014 Credit: 0 RAC: 0 |
The original Seti@home used that form of credit system, counting WUs completed rather than using the current form. My understanding is that the SETI Beta system puts all the credit-generation bits on the server side. The client simply reports its time. The details of how that works are somewhat obscure to me, as somehow the server would have to have some idea of the speed of the system. I suppose it might calculate that from the completion time. In any case, I found it interesting that systems with wildly different speeds, reporting very different CPU times for a work unit, still get the same credit claims. It does not seem to matter whether the client is optimized or not. It all makes sense in that a fast system will get more credit per hour, because it will do more work units per unit time, but it does seem like a lot of calculating when counting the number of work units processed would yield the same effect for keeping score. Moderator9 ROSETTA@home FAQ Moderator Contact |
BennyRop Send message Joined: 17 Dec 05 Posts: 555 Credit: 140,800 RAC: 0 |
So they're doing the equivalent of running a test decoy/model from every WU on a machine in the lab, and determining an official number of points for each WU's decoys/models? |
©2024 University of Washington
https://www.bakerlab.org