Posts by Biggles

41) Message boards : Number crunching : Help us solve the 1% bug! (Message 10766)
Posted 15 Feb 2006 by Profile Biggles
Post:
This work unit has been stuck at 1% for 25 hours now. I've only just noticed. Do you still want me to test it outside BOINC? I've suspended it for now.

For what it is worth, the computer is a Pentium M-based laptop running Windows XP and the latest version of the Crunch3r SSE2-optimised BOINC client.


Ran this via the command line with the switches xx 256b A -output_silent_gz -silent -increase_cycles 10 -new_centroid_packing -nstruct 10 -constant_seed -jran 968001 and it passed 1% fairly quickly.
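
In case anyone wants to script the same test, something along these lines should do it (a sketch only: the executable name is just a placeholder for whatever the science application binary is called on your install, and the switches are exactly the ones above):

# Sketch only: re-running the stuck work unit outside BOINC with the same
# switches. "rosetta.exe" is a placeholder for the science application binary;
# run it from the project directory so it can find its input files.
import subprocess

args = [
    "rosetta.exe",                  # placeholder binary name
    "xx", "256b", "A",              # positional arguments from the run above
    "-output_silent_gz", "-silent",
    "-increase_cycles", "10",
    "-new_centroid_packing",
    "-nstruct", "10",
    "-constant_seed",
    "-jran", "968001",
]
subprocess.run(args, check=True)    # raises CalledProcessError if the app fails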

Resumed in BOINC and it reset itself, but didn't get stuck this time.

Bummed about losing over a day of CPU time though.
42) Message boards : Number crunching : Help us solve the 1% bug! (Message 10695)
Posted 12 Feb 2006 by Profile Biggles
Post:
This work unit has been stuck at 1% for 25 hours now. I've only just noticed. Do you still want me to test it outside BOINC? I've suspended it for now.

For what it is worth, the computer is a Pentium M-based laptop running Windows XP and the latest version of the Crunch3r SSE2-optimised BOINC client.
43) Message boards : Number crunching : Approximate RAC question. (Message 6610)
Posted 18 Dec 2005 by Profile Biggles
Post:
Upon consideration, I guess the question had less to do with RAC and more to do with what it would benchmark at. Many thanks to Webmaster Yoda for finding the Housing and Food Services hosts to compare against. How did you do it? I can expect a measured floating point speed in the range of 1050-1100 million ops/s, which, clock for clock, is similar to an Athlon XP. I'm happy with that.

However, my 2.93 GHz Celeron D, which gets a measured floating point speed of 1318 million ops/s, is getting moved off Rosetta. It has 244% of the clock speed of the Tualatin-based machine but only 122% of the floating point speed; clock for clock, it's doing half the work. I think it's going to get stuck on GIMPS.
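
Roughly, the arithmetic works out like this (the Tualatin figure is just the midpoint of the 1050-1100 estimate above):

# Per-clock comparison of the two benchmark figures quoted above; the
# Tualatin number is the midpoint of the 1050-1100 million ops/s estimate.
celeron_d = {"clock_mhz": 2930, "mflops": 1318}
tualatin = {"clock_mhz": 1200, "mflops": 1075}

clock_ratio = celeron_d["clock_mhz"] / tualatin["clock_mhz"]   # ~2.44 -> 244%
flops_ratio = celeron_d["mflops"] / tualatin["mflops"]         # ~1.23 -> ~122%

# Work done per clock cycle relative to the Celeron 1.2: roughly 0.5
print(f"{flops_ratio / clock_ratio:.2f}")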

Thanks for all the comments folks.
44) Message boards : Number crunching : Approximate RAC question. (Message 6462)
Posted 16 Dec 2005 by Profile Biggles
Post:
I'm looking to move my machines around with the end of FAD. One of my borged machines is a 1.2 GHz Celeron, based on the Tualatin-core Pentium III. I don't know of anyone running something like this, so I don't know what sort of benchmark it would get, assuming a standard client and Windows.

Anyone got one for me to compare against? Or know somebody who does?

Cheers
45) Message boards : Number crunching : The cheating thread (Message 4369)
Posted 26 Nov 2005 by Profile Biggles
Post:
Just a few thoughts.

I'm not going to participate in a project rife with cheating. I know there will always be some cheating on every project, but there is a difference between a little and a lot. Many people feel the same way, especially those from the stats-driven teams.

What about ignoring the BOINC benchmark for credit purposes and having a benchmark within Rosetta itself? That way it wouldn't be open to manipulation by tampered BOINC clients.

Optimised clients are a good thing if they speed up crunching and cut work unit times. It wouldn't be fair to do things twice as fast as everyone else and only get half the credit for it. Flop counting would make things far fairer, and would make the use of optimised clients a clear positive.
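
A toy comparison of the two schemes, with made-up numbers rather than BOINC's actual formula or scaling:

# Toy comparison, with made-up numbers: time-based credit versus flop
# counting for two clients that do exactly the same work.
benchmark_mflops = 1000.0      # BOINC benchmark result (same hardware)
flops_done = 3.6e12            # identical work performed by both clients

standard_time = 3600.0         # seconds with the stock science app
optimised_time = 1800.0        # same work in half the time with an optimised app

def time_based_credit(cpu_seconds: float) -> float:
    # credit proportional to CPU time * benchmark, with an arbitrary scale
    return cpu_seconds * benchmark_mflops / 72000.0

def flop_based_credit(flops: float) -> float:
    # credit proportional to the work actually done, arbitrary scale
    return flops / 7.2e10

print(time_based_credit(standard_time), time_based_credit(optimised_time))   # 50.0 25.0
print(flop_based_credit(flops_done), flop_based_credit(flops_done))          # 50.0 50.0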

The guy mentioned further up the thread, with the huge RAC from a Pentium 4, is anonymous. It was pointed out that he could have legitimate production from a whole bunch of machines and simply have merged them; of course, we can't tell without being able to view his computers. What about turning off merging? I know it could get messy when we re-install clients and so on, but we could lump old entries together under an "inactive installs" heading and just hide them. That way we could tell whether production in a case like the one above is legitimate or not.
46) Message boards : Number crunching : code release and redundancy (Message 3134)
Posted 14 Nov 2005 by Profile Biggles
Post:
I'm here for both the science AND the stats.

With regard to redundancy, I find it difficult to have much faith in unverified results. How much important data do you write off to unstable machines before you consider redundancy worth having? How do we know we haven't already missed the most important result because it got corrupted? The other boon of redundancy is that it partially mitigates the effects of credit cheats or "tweakers".

As for releasing the source code, I think you should, in a limited way. By that I mean you should release the actual scientific processing code, so that programmers out there can optimise it where possible. Let them compile it and test it, and if it turns out to be better, have them submit it to you, but don't grant credit for work done with those test builds. When you are satisfied that it is a valid improvement, add some security/verification code that doesn't get released to the public, and release the result as the standard client.

This way people can contribute, but if they use the source to manipulate and fake results, they don't get credit for it. And if they come up with valid improvements, everybody benefits.

Hope that was clear enough.
47) Message boards : Number crunching : Linux vs Windows point awards (Message 2223)
Posted 4 Nov 2005 by Profile Biggles
Post:
OK, well in that case ignore my previous post.

However, with regard to redundancy, is any actually performed? I wouldn't feel particularly comfortable without it, since all it would take is one unstable computer giving bad results to end up with bad research. At least with some redundancy you can pick up on that.
48) Message boards : Number crunching : Linux vs Windows point awards (Message 2207)
Posted 3 Nov 2005 by Profile Biggles
Post:
Doh ... optimize it maybe?


Well... if by optimize you mean "complete a WU faster"... then that will not work. If the Linux client runs faster, it will claim even less credit than it is claiming now. The amount of credit claimed is based on how long the WU took.

That is why there are optimized BOINC clients. The optimized BOINC client increases the claimed credit from an optimized project by optimizing/increasing the benchmark results.



My understanding is that with BOINC, credit = CPU time * benchmark score, except that this is averaged across all those who complete a WU, with the greatest and lowest claims discarded.

For simplicity's sake, let us say that 5 users complete the same WU. Let us also say that we are running an optimised client, which took less time to complete, meaning we claimed less credit; we'll claim 45, for instance. Everyone else took longer to run the WU than we did, even with identical computers, because they are running standard clients, so they claim 50. And to make the example nice and easy, we'll have one guy who took longer still and claimed 55.

BOINC would therefore discard the greatest and lowest claimed credits, 45 and 55 in our example, and award the average of the remaining claims, which is 50.

So we got 50 credits in less time than everyone else, because we used an optimised client. Does that make sense? Or am I just wrong?
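
Putting that example into numbers (just the arithmetic as I understand it, not the real validator code):

# Five claims for the same WU, as in the example above: drop the highest
# and lowest, then average what is left.
claims = [45, 50, 50, 50, 55]
trimmed = sorted(claims)[1:-1]          # drops 45 and 55
granted = sum(trimmed) / len(trimmed)
print(granted)                          # 50.0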
49) Message boards : Cafe Rosetta : Find-a-Drug Refugees (Message 2106)
Posted 2 Nov 2005 by Profile Biggles
Post:

