Optimized Client?

Message boards : Number crunching : Optimized Client?

Terminal*

Joined: 23 Nov 05
Posts: 6
Credit: 7,845,878
RAC: 0
Message 11082 - Posted: 21 Feb 2006, 7:31:41 UTC

Is there an optimized client for certain CPU's like there is for SETI@home?
ID: 11082 · Rating: 0
Astro

Joined: 2 Oct 05
Posts: 987
Credit: 500,253
RAC: 0
Message 11083 - Posted: 21 Feb 2006, 7:32:54 UTC
Last modified: 21 Feb 2006, 7:33:50 UTC

Welcome to Rosetta

nope, No Op app


ID: 11083 · Rating: 0
Terminal*

Joined: 23 Nov 05
Posts: 6
Credit: 7,845,878
RAC: 0
Message 11085 - Posted: 21 Feb 2006, 8:10:00 UTC - in response to Message 11083.  

Welcome to Rosetta

nope, No Op app


ty :) hope to enjoy this as much as seti
ID: 11085 · Rating: 0
kattanweb

Joined: 28 Oct 05
Posts: 2
Credit: 1,257
RAC: 0
Message 11319 - Posted: 24 Feb 2006, 13:40:43 UTC

Why not?
With an optimized SETI@home client I can do at least double the WUs compared to the non-optimized one...

This might be a reason for me to stay longer with SETI until Rosetta has optimized clients...
ID: 11319 · Rating: 0
Astro

Joined: 2 Oct 05
Posts: 987
Credit: 500,253
RAC: 0
Message 11321 - Posted: 24 Feb 2006, 13:56:45 UTC - in response to Message 11319.  

Why not?
With an optimized SETI@home client I can do at least double the WUs compared to the non-optimized one...

This might be a reason for me to stay longer with SETI until Rosetta has optimized clients...

First and foremost, it's because they haven't released the source code. No code, nothing to optimize. Degrees of improvement vary depending on the app. It might already be as optimized as possible. Only they know.
ID: 11321 · Rating: 0
FluffyChicken

Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 11325 - Posted: 24 Feb 2006, 14:04:47 UTC - in response to Message 11319.  

Why not?
With an optimized SETI@home client I can do at least double the WUs compared to the non-optimized one...

This might be a reason for me to stay longer with SETI until Rosetta has optimized clients...



I do not get your reasoning?

Why would not having an apparently optimised science application stop you from running here?

If you don't crunch then no work gets done; if you crunch the normal app then work gets done. Surely some is better than nothing, and even without the optimised app it is still 3 times faster than SETI due to having no redundancy.
Team mauisun.org
ID: 11325 · Rating: 1
kattanweb

Joined: 28 Oct 05
Posts: 2
Credit: 1,257
RAC: 0
Message 11333 - Posted: 24 Feb 2006, 16:00:17 UTC - in response to Message 11325.  

I do not get your reasoning?


Knowing that with an optimized version I could do at least twice the work in the same time, accepting to do half of it now discourages me a bit: running the processor at 100%, in addition to increasing both temperature and power consumption, just to do half a job is not very acceptable.

It's like getting a Ferrari and driving it at 50 mph.
ID: 11333 · Rating: 0
David Baker
Volunteer moderator
Project administrator
Project developer
Project scientist

Joined: 17 Sep 05
Posts: 705
Credit: 559,847
RAC: 0
Message 11338 - Posted: 24 Feb 2006, 16:21:53 UTC

Rosetta is written in standard C++, which we have spent a huge amount of time trying to optimize. When compiling for the different architectures, we use the highest level of optimization available.
ID: 11338 · Rating: 2
James

Joined: 8 Jan 06
Posts: 21
Credit: 11,697
RAC: 0
Message 11899 - Posted: 11 Mar 2006, 21:56:34 UTC - in response to Message 11082.  

Is there an optimized client for certain CPU's like there is for SETI@home?


There are two different kinds of optimization that can be performed. One is on the Rosetta application that the UW team has put together, and the other is on the actual BOINC client, i.e., boinc.exe.

Since Rosetta's source code isn't out, you can't optimize it (such as by compiling it to more closely match your system, etc.).

Boinc.exe, on the other hand, can be optimized. If you look at the 'top' computers and compare similar processors (go for, say, AMD Athlon 3800s or 4400s), you will notice that some have double the benchmarks of the others. They are running an optimized BOINC client.

You can find these through Google. Assuming you have a current processor (from within the last two years, really), you would want the SSE2 version.
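
To give a rough idea of why the compile target matters for the benchmark numbers (this is just a sketch, not the real BOINC benchmark code), the figure reported depends on how the binary running the loop was built, not on the science app at all:

[code]
// Minimal sketch of a Whetstone-style floating-point loop (hypothetical,
// not the actual BOINC benchmark).  The Mflop/s it reports depends on how
// *this* binary was compiled (generic i686 vs. SSE2 and so on), which is
// why an "optimized" boinc.exe shows bigger benchmarks without the science
// app doing any more work.
#include <chrono>
#include <cstdio>

int main() {
    const long iters = 50000000L;
    double x = 0.5;
    auto t0 = std::chrono::steady_clock::now();
    for (long i = 0; i < iters; ++i)
        x = x * 1.000000001 + 1e-9;            // 2 floating-point ops per pass
    auto t1 = std::chrono::steady_clock::now();
    double secs = std::chrono::duration<double>(t1 - t0).count();
    std::printf("~%.0f Mflop/s (x=%f)\n", 2.0 * iters / secs / 1e6, x);
    return 0;
}
[/code]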

If you can't figure it out, you can ask me.

ID: 11899 · Rating: 0
James

Joined: 8 Jan 06
Posts: 21
Credit: 11,697
RAC: 0
Message 11900 - Posted: 11 Mar 2006, 22:02:00 UTC - in response to Message 11899.  

Is there an optimized client for certain CPU's like there is for SETI@home?


There are two different kinds of optimization that can be performed. One is on the Rosetta application that the UW team has put together, and the other is on the actual BOINC client, i.e., boinc.exe.

Since Rosetta's source code isn't out, you can't optimize it (such as by compiling it to more closely match your system, etc.).

Boinc.exe, on the other hand, can be optimized. If you look at the 'top' computers and compare similar processors (go for, say, AMD Athlon 3800s or 4400s), you will notice that some have double the benchmarks of the others. They are running an optimized BOINC client.

You can find these through Google. Assuming you have a current processor (from within the last two years, really), you would want the SSE2 version.

If you can't figure it out, you can ask me.


I meant to add that BOINC and the projects do not encourage the use of 'optimized' BOINC clients. There is a warning about that on the BOINC project's page. I did not want to sound like I was encouraging it, just wanted to put it out in the public domain.

ID: 11900 · Rating: 0
James

Joined: 8 Jan 06
Posts: 21
Credit: 11,697
RAC: 0
Message 11920 - Posted: 12 Mar 2006, 2:47:47 UTC - in response to Message 11900.  

When I say Einstein is 'optimized', what I'm really saying is that it generally won't take a bogus request for work unit credit. People still manage to 'cheat' by importing bogus benchmarks and sending them to the scheduler.

Again, I'm not sure it's cheating, but it annoys people and might turn them off to the project. People want to know that everyone is 'playing fair'. I'm not exactly sure how reporting bogus benchmarks and credit requests would make you feel good about yourself. All that really matters are the work units.

ID: 11920 · Rating: -1
TritoneResolver

Joined: 28 Nov 05
Posts: 4
Credit: 57,690
RAC: 0
Message 11970 - Posted: 13 Mar 2006, 6:17:12 UTC

If you're looking to crunch more WU's, an optimized app won't work since Rosetta WU's are now CPU run-time dependent. Also, when optimizers compile a science app., they usually sacrifice accuracy/thoroughness for speed. The team at Rosetta probably doesn't want their users to be returning "skimmed-over" WU's when there's so much precision involved in this science.
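
As a tiny illustration of that trade-off (hypothetical code, nothing to do with Rosetta's actual source), aggressive floating-point flags let the compiler reorder arithmetic, and the result can change:

[code]
// Hypothetical example of why "fast" compile flags can alter results.
// With something like -ffast-math the compiler may reassociate this sum,
// so the printed value can differ from the strict-IEEE build:
//   g++ -O2             sum.cpp    (strict left-to-right order)
//   g++ -O2 -ffast-math sum.cpp    (reordering/vectorization allowed)
#include <cstdio>

double sum(const double* v, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += v[i];   // order matters in floating point
    return s;
}

int main() {
    double v[4] = {1e16, 1.0, -1e16, 1.0};   // values chosen to cancel badly
    std::printf("%.1f\n", sum(v, 4));        // strict order gives 1.0;
    return 0;                                // other orderings can give 2.0
}
[/code]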
ID: 11970 · Rating: 0
MAOJC

Joined: 19 Jan 06
Posts: 15
Credit: 2,727,567
RAC: 0
Message 11978 - Posted: 13 Mar 2006, 12:46:28 UTC - in response to Message 11970.  

If you're looking to crunch more WU's, an optimized app won't work since Rosetta WU's are now CPU run-time dependent. Also, when optimizers compile a science app., they usually sacrifice accuracy/thoroughness for speed. The team at Rosetta probably doesn't want their users to be returning "skimmed-over" WU's when there's so much precision involved in this science.



That is just not true. Most of these client apps are compiled for a generic i386 or i686 processor to fit the widest range of CPUs available. If CPU capabilities like MMX, SSE, or x86_64 are added, then the performance goes up. Adding in compiler optimizations can improve the performance as well. All of this without changing the source code one bit. On Rosetta, more work done in a given time frame = more points.
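
For example (made-up flags and code, since the actual Rosetta build options aren't public), the same source can simply be recompiled for different CPU targets:

[code]
// Sketch only: one source file, several build targets, no source changes.
//   g++ -O2 -march=i686            dot.cpp   # generic build, x87 FP
//   g++ -O2 -march=pentium4        dot.cpp   # SSE2 allowed
//   g++ -O3 -march=native          dot.cpp   # everything the local CPU offers
#include <cstddef>
#include <cstdio>
#include <vector>

double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        s += a[i] * b[i];            // the kind of loop vector units speed up
    return s;
}

int main() {
    std::vector<double> a(1000000, 1.5), b(1000000, 2.0);
    std::printf("%f\n", dot(a, b));  // same answer from every build, just faster
    return 0;
}
[/code]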
ID: 11978 · Rating: -1
Lee Carre

Joined: 6 Oct 05
Posts: 96
Credit: 79,331
RAC: 0
Message 12007 - Posted: 14 Mar 2006, 9:04:35 UTC - in response to Message 11970.  

If you're looking to crunch more WU's, an optimized app won't work since Rosetta WU's are now CPU run-time dependent.
Ahem, yes it will work; it'll just get more done in the same amount of time, which is exactly what Rosetta needs, so fewer WUs need to be sent out.

Never mind the fact that the science gets done faster, leading to usable results being available sooner.
Want to search the BOINC Wiki, BOINCstats, or various BOINC forums from within firefox? Try the BOINC related Firefox Search Plugins
ID: 12007 · Rating: 0
Jayargh

Joined: 8 Oct 05
Posts: 23
Credit: 43,726
RAC: 0
Message 12158 - Posted: 17 Mar 2006, 19:18:31 UTC - in response to Message 11338.  
Last modified: 17 Mar 2006, 19:54:04 UTC

Rosetta is written in standard C++ which we have spent a huge amount of time trying to optimize. when compiling for the different architectures, we use the highest level of optimization available.


I do believe you are somewhat wrong about this Dr. Dave ...all due respect intended... i.e. I get rewarded in SETI with being able to use SSE3....not on a standard app but an OPTIMIZED app..... the app runs significantly faster than an SSE2 app on the same machine...now are you telling me that when your app recognizes I am using SSE3 it will automatically optimize to that compared to, let's say, my MMX library? I haven't seen evidence of this...so optimization is allowing the faster computers to go even faster, and you have not done this yet from what I can tell....

ID: 12158 · Rating: 1
BennyRop

Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 12164 - Posted: 17 Mar 2006, 21:18:48 UTC - in response to Message 12158.  

I do believe you are somewhat wrong about this Dr. Dave ...all due respect intended... i.e. I get rewarded in SETI with being able to use SSE3....not on a standard app but an OPTIMIZED app..... the app runs significantly faster than an SSE2 app on the same machine...now are you telling me that when your app recognizes I am using SSE3 it will automatically optimize to that compared to, let's say, my MMX library? I haven't seen evidence of this...so optimization is allowing the faster computers to go even faster, and you have not done this yet from what I can tell....

Over at Distributed Folding, this topic was brought up numerous times. Dr. Howard Feldman mentioned that the DF client was spending 90% of its time traversing pointers - a task that was not ideally suited to hand optimization for 3DNow!, SSE, or Altivec. One of the Mac users even ran the client through a profiler to see how much of the code could be switched to take advantage of Altivec; and the small amount was not considered significant enough to justify hand coding a special version of the code just for the Mac users.
And while we'd have to get one of the programmers to give us an analysis of what the Rosetta client is doing and whether the code could be significantly sped up by turning on 3DNow!, SSE, and Altivec optimizations - I shared this as a case where a life science DC app would not have benefited to the extent of a more mathematically oriented DC app.

Since they're still working on tracking down and getting rid of some rather annoying bugs, I'd hope they'd get rid of the majority of the bugs that we participants are complaining about before making changes that could create other hard-to-track-down bugs.

And if the client was say.. 4 times faster, how would we be able to tell? We download a WU, and produce at least 1 model from it.. and if there's time, produce more until we reach the max time setting. If our max time setting is 2 hours.. and the WU's first model takes 8 hours to create.. then our system works for 8 hours on that first model. If the first model takes 15 mins, then our system creates 8 of them, and stops working on that WU at 2 hours.
We get our score from the benchmark Boinc runs, and the amount of time we spend on the WU. Every day.. we can get credit for 24 hours of crunching times our benchmark. Speed the client up 4 times, and we get credit for 24 hours of crunching times the exact same Boinc benchmark. It doesn't matter if we're now crunching that 8 hour model in 2 hours.. or 32 of those 15min models in 2 hours.
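
In other words (a rough sketch with a made-up normalization, not the exact BOINC cobblestone formula), the claimed credit only sees the benchmark and the hours:

[code]
// Rough sketch, made-up constants -- not the real BOINC credit formula.
// The point: credit = f(benchmark, CPU time), with no term for models made,
// so a 4x faster science app earns the same credit and just returns 4x the models.
#include <cstdio>

double claimedCredit(double benchmark, double cpuHours) {
    return benchmark * (cpuHours / 24.0);    // models produced never enter into it
}

int main() {
    // 8 CPU hours on the stock app (1 model) vs. 8 hours on a 4x-fast app (4 models)
    std::printf("stock app: %.1f credits, 1 model\n", claimedCredit(100.0, 8.0));
    std::printf("fast app:  %.1f credits, 4 models\n", claimedCredit(100.0, 8.0));
    return 0;
}
[/code]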

After they (David Kim is the main programmer, or one of them, right?) get the bugs ironed out, having them analyse the app and create optimized clients, if this app would benefit, would be wonderful. They'll also have to change the way credits are calculated to keep us happy.
ID: 12164 · Rating: 1
Jayargh

Joined: 8 Oct 05
Posts: 23
Credit: 43,726
RAC: 0
Message 12165 - Posted: 17 Mar 2006, 21:24:48 UTC - in response to Message 12164.  
Last modified: 17 Mar 2006, 21:58:50 UTC

And if the client was say.. 4 times faster, how would we be able to tell? We download a WU, and produce at least 1 model from it.. and if there's time, produce more until we reach the max time setting. If our max time setting is 2 hours.. and the WU's first model takes 8 hours to create.. then our system works for 8 hours on that first model. If the first model takes 15 mins, then our system creates 8 of them, and stops working on that WU at 2 hours.
We get our score from the benchmark Boinc runs, and the amount of time we spend on the WU. Every day.. we can get credit for 24 hours of crunching times our benchmark. Speed the client up 4 times, and we get credit for 24 hours of crunching times the exact same Boinc benchmark. It doesn't matter if we're now crunching that 8 hour model in 2 hours.. or 32 of those 15min models in 2 hours.

After they (David Kim is the main programmer, or one of them, right?) get the bugs ironed out, having them analyse the app and create optimized clients, if this app would benefit, would be wonderful. They'll also have to change the way credits are calculated to keep us happy.


You can tell if you are doing more easily.... [and I have not brought up any credit issue on this possibility, on purpose, other than saying "rewarded" by SETI] It tells you in the graphics area how many models you have run...so now I can run say 30 avg. in 4 hours using SSE3 (if even possible, as you have brought up) or 15 avg. on standard.... Is this not producing the science faster? (I am hoping for a doubling, not a 4x as per your example) After all that's what I am doing at SETI and Einstein now also......Is this issue not worth bringing up? I mean I see all these BOINC projects clamouring for more output yet these easy ways to get more seem to slip by....Also, the code could be open source unless the product is copyrighted. Just because YOU haven't figured out an optimized app doesn't mean no one else can...as proved in SETI and Einstein.

ID: 12165 · Rating: 0
BennyRop

Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 12170 - Posted: 17 Mar 2006, 23:07:53 UTC

You can tell if you are doing more easily.... [...] It tells you in the graphics area how many models you have run..


Here's the data from my last 5 WUs.. on a machine crunching 24/7..

FA_RLXlo_hom025_1louA_361_186_0
cpu time:86750.03125
This process generated 161 decoys

FA_RLXai_hom025_1aiu__359_487_0
cpu time:86099.265625
This process generated 148 decoys

FA_RLXbq_hom022_1bq9A_359_271_0
cpu time:86419.15625
This process generated 226 decoys

FA_RLXce_hom002_1cei__360_55_0
cpu time:86215.828125
This process generated 216 decoys

HB_BARCODE_30_1ig5A_351_4908_1
cpu time:86166.90625
This process generated 241 decoys

So we have 5 different WUs that produced between 148 models and 241 models on my machine in about 24 hours of runtime each. There's much faster, smaller WUs that would have produced even more models. (Such as the ones that caused the Rosetta servers to be overwhelmed a month or two ago.) And, no doubt, even larger, longer WUs. The average speed of one WU doesn't match the average speed of another WU; so I'm back to the same question: how are you supposed to tell how much faster each new client is than the last?
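
Working those numbers out as a rate makes the spread obvious (same client, same machine):

[code]
// Decoys per CPU-hour for the five WUs listed above (same client, same box).
// The ~1.6x spread between WU types is exactly why a before/after comparison
// of client speed is so hard to read off these numbers.
#include <cstdio>

int main() {
    const char* name[]    = {"1louA", "1aiu", "1bq9A", "1cei", "1ig5A"};
    double      seconds[] = {86750.03, 86099.27, 86419.16, 86215.83, 86166.91};
    int         decoys[]  = {161, 148, 226, 216, 241};
    for (int i = 0; i < 5; ++i)
        std::printf("%-6s %5.1f decoys/hour\n",
                    name[i], decoys[i] / (seconds[i] / 3600.0));
    return 0;   // roughly 6.7, 6.2, 9.4, 9.0, 10.1
}
[/code]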

As it is, I can pull out FA_RLXai_hom025_1aiu__359_487_0 and HB_BARCODE_30_1ig5A_351_4908_1, and make claims about speed improvement that aren't valid. (Two different molecules, processed two different ways, with the same client..)


ID: 12170 · Rating: 0
Jayargh

Joined: 8 Oct 05
Posts: 23
Credit: 43,726
RAC: 0
Message 12172 - Posted: 17 Mar 2006, 23:22:23 UTC - in response to Message 12170.  

You can tell if you are doing more easily.... [...] It tells you in the graphics area how many models you have run..


Here's the data from my last 5 WUs.. on a machine crunching 24/7..

FA_RLXlo_hom025_1louA_361_186_0
cpu time:86750.03125
This process generated 161 decoys

FA_RLXai_hom025_1aiu__359_487_0
cpu time:86099.265625
This process generated 148 decoys

FA_RLXbq_hom022_1bq9A_359_271_0
cpu time:86419.15625
This process generated 226 decoys

FA_RLXce_hom002_1cei__360_55_0
cpu time:86215.828125
This process generated 216 decoys

HB_BARCODE_30_1ig5A_351_4908_1
cpu time:86166.90625
This process generated 241 decoys

So we have 5 different WUs that produced between 148 models and 241 models on my machine in about 24 hours of runtime each. There's much faster, smaller WUs that would have produced even more models. (Such as the ones that caused the Rosetta servers to be overwhelmed a month or two ago.) And, no doubt, even larger, longer WUs. The average speed of one WU doesn't match the average speed of another WU; so I'm back to the same question: how are you supposed to tell how much faster each new client is than the last?

As it is, I can pull out FA_RLXai_hom025_1aiu__359_487_0 and HB_BARCODE_30_1ig5A_351_4908_1, and make claims about speed improvement that aren't valid. (Two different molecules, processed two different ways, with the same client..)



As I said AVERAGE.... My credits average out over a LONG period of time ...so would my models....now I grant you it would take hard scrutiny over a period of time to see if it were working .....but..... I believe ADMIN could tell us easily that more models WERE in fact being run .... as it benefits THEM to do so :)

ID: 12172 · Rating: 1
Jayargh

Joined: 8 Oct 05
Posts: 23
Credit: 43,726
RAC: 0
Message 12175 - Posted: 18 Mar 2006, 2:37:57 UTC
Last modified: 18 Mar 2006, 3:25:47 UTC

I would also like to add that this is a "production" project ...you seem to want to solve all the minor problems before any leaps in production can occur.....not an alpha or a beta.... They ought to be open to such queries on open source code, unless it's copyrighted as in the case of LHC SixTrack or CPDN... As SETI has done.... Einstein, Rosetta, or Predictor should allow open source code to advance their projects if possible...Truly I think the science benefits most from offering the open source ...not the cheaters, as they are going to find a way to cheat ANYWAY...no less valid results allowing open source.....my 2 cents...
ID: 12175 · Rating: 0