Posts by James

1) Message boards : Number crunching : Are INTEL systems not getting enough credit or are AMD systems getting too much? (Message 16196)
Posted 13 May 2006 by James
Post:


Gents,

A dual-core system will usually get the same benchmark as a single-core, all else being equal. The advantage is that by using both cores you can process twice the work per unit time, which in effect doubles the CPU seconds you report. Under those conditions a dual-core system will claim twice as much credit as an otherwise identical single-core system. Basically, it gets twice the throughput.

So you can look at this as twice the work in the same amount of time, or twice the credits in the same amount of time, when compared to a single-core system. This also scales up to quad-core systems.

In any case the benchmarks will be the same for two systems clocked at the same speed no matter the number of cores, because the benchmark is a function of clock speed.
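To put rough numbers on that, here is a sketch of the classic benchmark-based claim. It assumes the usual BOINC "cobblestone" definition of 100 credits per day of CPU time on a reference 1000-MFLOPS/1000-MIPS machine; the constants are illustrative, not taken from the BOINC source.

# Rough sketch of the classic benchmark-based credit claim.
# Assumption: 100 credits per day of CPU time on a reference machine
# scoring 1000 MFLOPS (Whetstone) and 1000 MIPS (Dhrystone).
# Constants are illustrative, not copied from the BOINC source.

def claimed_credit(whetstone_mflops, dhrystone_mips, cpu_seconds):
    days = cpu_seconds / 86400.0
    # Average the two benchmarks, scaled against the reference machine.
    factor = (whetstone_mflops / 1000.0 + dhrystone_mips / 1000.0) / 2.0
    return 100.0 * days * factor

# Same per-core benchmark; the dual core just reports twice the CPU seconds.
single = claimed_credit(1500, 3000, 8 * 3600)    # one core, 8 CPU-hours
dual = claimed_credit(1500, 3000, 16 * 3600)     # two cores, 16 CPU-hours
print(single, dual)                              # dual claims exactly 2x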

This of course ignores any effects of overclocking. That is a different subject.


Overclocking is not entirely off-topic, in that it does account for some of the 'discrepancies' in claimed credits. Specifically, if you take two X2 4800+ systems and one is overclocked to 2.7 GHz while the other runs at the stock 2.4 GHz, comparing them is 'apples to oranges'. This assumes the overclocking is done correctly, i.e., the system is producing stable benchmarks (if you don't get stable benchmarks, decrease your frequency and/or multiplier).

I happen to have my 4800+ overclocked to ~2.67 GHz, along with major tweaks to the memory clock, voltages, etc. I replaced the stock fan almost immediately with a large, hardly noticeable one (the stock fan runs at 3.5k RPM at full load), which decreases load temps by 5 °C.

As for 'optimized' clients, I believe the actual 'point' is to receive credits that reflect the system's actual performance rather than a BOINC benchmark bias. The benchmark is all the client really contributes anyway; the Rosetta application does the crunching.

Another interesting point is that the benchmarks actually favor Intel machines due to the default compilation. AMD 'flags' are not part of the default one-size-fits-all benchmark program, which isn't the case with some 'optimized' clients. The point, for those who use optimized clients, is to get a benchmark that treats their machine fairly rather than treating it as an Intel (which is basically what the benchmark is compiled for).

As more and more people migrate to 'optimized' clients, the incentive for others in a 'competitive' frame of mind increases. This *is* happening; it's becoming much more widespread.

It also shouldn't be an issue, as the benchmarks attempt to accurately represent the system's true performance.

That aside, the best 'client' out there is Trux's calibrated client (non-optimized), which lets you significantly 'play around' with BOINC: set CPU affinity, block the annoying popups, set process priority, set project priority, force results to be reported immediately, etc. It'd be nice if the BOINC developers would offer the same features rather than let this proliferation occur, which rather decreases their motivation to innovate. The BOINC client/GUI developers are also biased toward the SETI project, which has an optimized *application*.
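None of this is Trux's actual code, but as a sketch of the kind of per-process control his client exposes, here is roughly what setting CPU affinity and priority looks like using the third-party psutil library (the process name is a hypothetical example):

# Illustrative sketch only -- not Trux's client. Shows the kind of
# controls (CPU affinity, scheduling priority) his client exposes.
import psutil   # third-party library, assumed installed

APP_NAME = "rosetta_beta.exe"   # hypothetical science-app process name

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == APP_NAME:
        proc.cpu_affinity([0])   # pin the science app to core 0
        proc.nice(10)            # lower priority (Unix-style nice value;
                                 # Windows uses priority-class constants)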

Einstein@home also has an optimized application that is going to be integrated as the default in the future. It vastly speeds up the crunching of WUs.

Again, claiming that there is 'unfair' credit suggests that the sole purpose of the project is credit, when it is actually the science that matters.

I'd also like to note that the 'optimized' clients do *not* report inflated scores by default; their benchmarks are based on your specific processor's capabilities (the two legit ones are Trux's and Crunch's). The 'suspect' ones are systems that are obviously running manipulated clients compiled to report completely false benchmarks, as some individuals have done.
2) Message boards : Number crunching : RAC cheats, is this a problem (Message 12956)
Posted 2 Apr 2006 by James
Post:
Looking at some of the top computers shows that they may be (are!) exploiting the credit system. My computer with a standard BOINC client claims about 14 credits per hour, while many of the computers in the top RAC list are claiming 40 to 60 credits per hour. Is this reasonable? I think not!

In addition, some of those computers appear to be doing 1 or 2 hours of work and claiming they have done 4 to 8 hours of work, thereby further inflating their credit claims.

Any fool can create a 'compile your own' BOINC client containing any number of credit exploits. All reputable BOINC projects should ONLY allow an official BOINC client; self-compiled clients should be strictly prohibited. I realize that some individuals with odd computer hardware would then not be able to run BOINC projects. In such cases a review of their clients by project developers would be necessary, or they would simply be forced to use compatible hardware or just not run BOINC projects.

The reason an official BOINC client is needed is that many who now process the work units will be discouraged by continuing credit exploitation (cheating) and simply refuse to contribute any more of their computer time. Potential new users, upon hearing about these exploits and finding that the project developers have failed to take any action, will also decline to contribute their computer time.

It should also be simple for any project to set a maximum credits-per-hour value per work unit and enforce it. I realize this is a simplistic solution and may not address all of the project's requirements.
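Such a cap is simple to express. As a sketch (the threshold and function names are made up for illustration, not real BOINC validator code):

# Sketch of a server-side cap on claimed credit. The 40-credits-per-hour
# threshold and names are made-up illustrations.
MAX_CREDITS_PER_HOUR = 40.0

def granted_credit(claimed, cpu_seconds):
    cap = MAX_CREDITS_PER_HOUR * (cpu_seconds / 3600.0)
    return min(claimed, cap)

print(granted_credit(120.0, 2 * 3600))   # 2 h of work, 120 claimed -> 80 granted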

I find this credit exploitation offensive and the failure by the developers to take any action equally offensive. I contribute my computer time because I believe the science being done here is important. Perhaps I am the crazy one.



There really is not much that a project can do about this. The BOINC code has been publicly released. It is very easy to compile it so it looks identical to the official version yet has some benchmark adjustments. Many of these adjusted clients have been made so they fall within the normal range of credit claims, but at the top end of it.

In some cases the modifications to the BOINC client simply take advantage of special features of the computer for which they are compiled, and in fact produce a more accurate and legitimate benchmark for that system. The problem comes in when the project code does not use those features of the system. This causes the benchmark to be high but the actual processing time to be slower, thus producing higher credit claims. In that case the project might be accused of not providing an application that takes full advantage of the computing power available, but they are hardly doing that intentionally; it is just difficult to have a special version for every possible system out there.

As to a limit on the credit per work unit: this might work on other projects, but it will not on Rosetta. With the run-time setting it is possible to run a work unit for anywhere from one hour to several days. With that level of variability, where would you set the maximum credit? How would you verify it without overloading the servers? In any case you would still have to accommodate the broad range of possible credit claims made by different systems. With the run-time setting, legitimate credit claims can range from 10 to over 2400 credits for a single work unit. Hourly claims range from 10 to around 50, depending on system speed. So this is just not a workable answer.
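To see why a single per-WU cap fails, plug in the figures just quoted (the code below is only that arithmetic):

# Legitimate per-WU claims under the run-time preference, using the
# figures quoted above: 10-50 credits/hour, run times of 1 h to 2 days.
for hours in (1, 24, 48):          # user-selected run-time preference
    for rate in (10, 50):          # legitimate credits-per-hour range
        print(hours, "h at", rate, "cr/h ->", hours * rate, "credits")

# Legitimate totals span 10 to 2400 credits per WU. A cap of 2400 lets a
# cheater on a 1-hour WU claim far above any fair rate, while a cap near
# 50 rejects honest multi-day results. No single number works.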

As for the developers not taking action: they have deployed the standard BOINC system. While they have chosen not to use redundancy, the ability to make that choice is part of the server software. They have removed some of the more severe violators from the stats as they are identified. The project team is considering a variety of possible solutions to the credit issue, but right now they are focused primarily on killing the bugs in the application. Once that is done, they have stated publicly that they will return to the issue of credit claims and the awarding of credits owed from a range of processing problems.


Mod, you are twisting this a bit. Regardless of an 'optimized' client, a project can calibrate the claimed credits; look no further than Einstein@home. They most definitely adjust the credits to get rid of the effect of inflated benchmarks.

As for the somewhat weak claim that people are merely doing this because BOINC doesn't fully utilize their resources (which is the essence of your claim): that is a BOINC issue, not a system issue. I know for a fact that AMDs are supported poorly in BOINC compilations in general. That doesn't mean I 'deserve' more credits.

Yes, the source code has been released to the public. Perhaps you should also note that the client doesn't crunch the WUs; the Rosetta application does. The client has nothing to do with the project other than enabling Rosetta's app to run and managing preferences. Rosetta controls the project and has chosen NOT to release its source (unlike SETI, where optimization can occur) and has chosen NOT to offer system-specific compilations to maximize CPU efficiency.

Which is neither here nor there. RAC cheating is an issue in that it seems to be a vanity thing, where people feel the need to be in the 'top computer' section. Given that the run times of these WUs are known, it's not as if you can't figure out how to adjust credits; Einstein has.
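As a sketch of what that kind of calibration amounts to (not Einstein's actual code; the WU type and numbers are hypothetical): since the server knows roughly what each work unit costs, it can anchor the grant to that known cost instead of trusting the host's benchmark-based claim.

# Sketch of server-side credit calibration -- not Einstein's actual
# code. The WU-type table and tolerance band are hypothetical.
KNOWN_CREDIT_PER_WU = {"some_wu_type": 180.0}

def calibrated_grant(wu_type, claimed):
    fixed = KNOWN_CREDIT_PER_WU[wu_type]
    # Tolerate modest variation around the known value; clamp the rest.
    return max(0.5 * fixed, min(claimed, 1.5 * fixed))

print(calibrated_grant("some_wu_type", 900.0))   # inflated claim -> 270.0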
3) Message boards : Number crunching : 600,000 second/165 Hour/7 day WU!!! (Message 12955)
Posted 2 Apr 2006 by James
Post:
I'll grant credit for this extreme circumstance. We may consider granting credit for all time-out errors in the future.

I am really wondering why I never actually received credit for this work unit, even after being promised it would be granted to me...


Change your max timeout settings, perhaps using Trux's XML script (not the OPTIMIZED client, but the 'calibration' client that won't artificially inflate your benchmarks) that comes with his BOINC client. This WU should have been 'killed' well before 600k seconds.

For example, Rosetta runs 120-minute work units. I 'kill' all WUs that do not complete after 145 minutes. You can 'tweak' your preferences :)
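If you'd rather not run a third-party client, a crude stand-alone watchdog does much the same thing. This sketch (process name hypothetical, psutil assumed installed) kills anything that has run past the limit; note it measures wall-clock time since launch, not CPU time, which is a simplification of what Trux's client does via its XML settings.

# Crude watchdog sketch -- terminates a WU that has run past a
# wall-clock limit. Not Trux's code; process name is hypothetical.
import time
import psutil   # third-party library, assumed installed

APP_NAME = "rosetta_beta.exe"   # hypothetical science-app process name
LIMIT = 145 * 60                # 145 minutes, per the preference above

for proc in psutil.process_iter(["name", "create_time"]):
    if proc.info["name"] == APP_NAME:
        if time.time() - proc.info["create_time"] > LIMIT:
            proc.terminate()    # abandon the stuck WU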

As for the credit issue, I have sympathy because I have participated in the climate projects and had unrecoverable errors at 50+ percent (you know, the MASSIVE, as in weeks/months, WUs). I did get credit, though.

Change your settings so this doesn't happen again.

This part isn't addressed to you:

Credit should be granted for 'real' processor usage. Rosetta, unlike say Einstein, does not calibrate credit against known WU run times. It's getting pretty sickening in general, because there are 3800+ machines at 2-point-something GHz claiming massive amounts of credit based on unreal benchmarks. I overclock my 4800+ from a stock 2.4 GHz to 2.7 GHz on each core, and I know a 3800+ can't get 3 times my floating-point and integer scores :) The same is true for the 2 GHz Intels doing the same thing.

I'm not necessarily upset about the 'cheating', but it encourages others to do the same, and it creates almost amusing benchmarks on the top computers pages.

The 1 percent error is annoying; so is the fact that Rosetta has yet to incorporate a calibration feature like, say, Einstein's, which grants credit where credit is deserved, not manipulated artificially.
4) Message boards : Number crunching : Give credit where credits due (Message 12953)
Posted 2 Apr 2006 by James
Post:





