Message boards : Rosetta@home Science : Feedback, .. bandwidth usage :-(
FZB Send message Joined: 17 Sep 05 Posts: 84 Credit: 4,948,999 RAC: 0
Not sure if it is an out-of-the-box feature, but you might get in contact with the Einstein@home guys; they have a system where you download one larger data file and get assigned multiple WUs that all work with that data file. -- Florian www.domplatz1.de
SwZ Send message Joined: 1 Jan 06 Posts: 37 Credit: 169,775 RAC: 0
Thank you, FluffyChicken! I sent the mail and posted a message so that everyone will notice my request. :-)
Who is CPDN?
SwZ Send message Joined: 1 Jan 06 Posts: 37 Credit: 169,775 RAC: 0
I think there is some misunderstanding here. If I understand blackbird correctly, the same idea arises from David Baker's message: "Each WU is doing ten independent folding trajectories. If we cut this down to two, for example, jobs would run 5 fold faster, but the traffic would go up five fold. Once we have more hardware in place we should be able to deal with more traffic." So I suggest: how about control over the number of trajectories, i.e. the computational cost of a WU? Increasing it 100-fold would be acceptable, and would cut the effective traffic 100-fold. :-) I mean that the user could set the number of trajectories, with a default value of 10. Whoever wants fast WU completion decreases this number; whoever wants small traffic increases it. The question of two users evaluating the same trajectories and comparing the results is easy to solve, since a trajectory is fully defined by its starting configuration (including the random number seed). I think this is what blackbird meant.
But David Baker meant compression by zip, not a change of protocol.
And FZB meant something quite different when he suggested asking Einstein@home for help with merging WUs into one data file. Sorry if I am mistaken!
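SwZ's tradeoff is easy to put into numbers. A minimal Python sketch, with assumed (not measured) transfer sizes: the fixed per-WU download is amortized over however many trajectories that WU computes, so raising the trajectory count divides the download traffic per model.

```python
# Illustrative arithmetic only: how per-model traffic scales with the
# number of trajectories per WU. Both transfer sizes are assumptions,
# not measured Rosetta@home figures.

PER_WU_DOWNLOAD_KB = 2048   # fragment files etc., paid once per WU (assumed)
PER_MODEL_UPLOAD_KB = 50    # result data per trajectory (assumed)

for nstruct in (2, 10, 100):
    per_model_kb = PER_WU_DOWNLOAD_KB / nstruct + PER_MODEL_UPLOAD_KB
    print(f"nstruct={nstruct:3d}: {per_model_kb:7.1f} KB of traffic per model")
```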
Rebirther Send message Joined: 17 Sep 05 Posts: 116 Credit: 41,315 RAC: 0
"Thank you, FluffyChicken! I sent the mail and posted a message so that everyone will notice my request. :-) Who is CPDN?" CPDN = ClimatePrediction.net :p
David E K Volunteer moderator Project administrator Project developer Project scientist Send message Joined: 1 Jul 05 Posts: 1018 Credit: 4,334,829 RAC: 0
I think I'm repeating myself: these are all ideas we are aware of. We'll look into them, but implementing them will require some work and time to develop and test. Sending out the same protein in batches will not improve bandwidth unless we make the input files (particularly the large fragment files) "sticky" and/or use locality scheduling. Application compression would be nice, but the application gets pushed out only once and then stays on the client, so it is a one-time burden for each app update. I like the idea of giving the user the option to increase the size of the workunit. We'll look into this.
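For reference, both mechanisms mentioned here exist in BOINC: an input file can be marked sticky so it stays on the client after the WU finishes, and locality scheduling (developed for Einstein@home) steers new work toward hosts that already hold the big data file. A toy Python sketch of the matching idea only; the data structures are invented for illustration and are not BOINC code:

```python
# Toy sketch of locality scheduling: prefer workunits whose large input
# file the host already has cached. Hypothetical data, not BOINC code.

def pick_workunit(host_cached_files, pending_workunits):
    """Return a WU whose big file the host already holds, else any WU."""
    for wu in pending_workunits:
        if wu["big_file"] in host_cached_files:
            return wu                      # no large download needed
    return pending_workunits[0] if pending_workunits else None

host = {"aa1abc_fragments.gz"}             # files kept on the client as "sticky"
queue = [
    {"name": "wu_17", "big_file": "aa1xyz_fragments.gz"},
    {"name": "wu_18", "big_file": "aa1abc_fragments.gz"},
]
print(pick_workunit(host, queue)["name"])  # -> wu_18, reusing the cached file
```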
FluffyChicken Send message Joined: 1 Nov 05 Posts: 1260 Credit: 369,635 RAC: 0
"Application compression would be nice but it gets pushed out only once and then stays on the client so it is a one time burden for each app update." Although, given that you (with our help) are trying to improve the app, these updates will keep happening; isn't that what this is all about in the first instance? Every little bit of bandwidth saving helps, especially on dial-up or capped/pay-as-you-go broadband. Given that other BOINC projects implement it, it can be done, and the code to implement it already exists. So far I've had 4 client updates; multiply that by 4 computers running through the current dial-up at 5 MB each and that's about 80 MB just for science apps, and I know others with similar setups. OK, so I noticed this and 'babysat' the downloads onto one computer, then transferred them across to the others to save me money. That was within 1 to 2 months of me joining. OK, so it's not as bad as the job files, though... Team mauisun.org
David Baker Volunteer moderator Project administrator Project developer Project scientist Send message Joined: 17 Sep 05 Posts: 705 Credit: 559,847 RAC: 0
Yes. If we sent you WUs that were ten times longer, you would have ten times less traffic. There have been complaints about work units being too long already, though. If you could choose your WU length, how many people would like significantly longer ones?
STE\/E Send message Joined: 17 Sep 05 Posts: 125 Credit: 4,103,208 RAC: 204
"how many people would like significantly longer ones?" ========= Personally I wouldn't care for that, for one reason: I already lose enough time each day because of computation errors (that I know are not my computers' fault) and WUs getting stuck at certain percentage points. Some of these WUs can take me upwards of 7-9 hours, and I get irritated enough when one errors out after 5 or 6 hours, or sits stuck at whatever percentage point for 5-6 hours or more. If the WUs were 10 times longer, I would be taking the chance of WUs erroring out after 50 or 60 hours, or getting stuck at the 40 or 50 hour mark... :/ I simply wouldn't want to risk losing that much CPU time again and again ...
SwZ Send message Joined: 1 Jan 06 Posts: 37 Credit: 169,775 RAC: 0
"I like the idea of giving the user the option to increase the size of the workunit. We'll look into this." Yes! :-)
A problem would appear if all trajectories had to be collected before reporting the results as one batch: any error would then invalidate all the trajectories and make a large batch unacceptable for computation. But I think each trajectory could be reported independently. So one WU (with one big fragments file) could really be evaluated for a long time, virtually split into small trajectory subunits, each of which is reported independently. That would even reduce the number of broken WUs we have now!
blackbird Send message Joined: 4 Nov 05 Posts: 15 Credit: 93,414 RAC: 0
I'm thankful to SwZ for clarifying my ideas (two Russians can understand each other even in English :). If the WUs run 5 times longer (without any loss of scientific value, of course), and transfers can be compressed 3 times better with LZMA, the traffic can be decreased 15-fold. WU completion time is a psychological question; bandwidth is a financial and time-consuming question for participants.
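How much LZMA gains over zip-style compression depends entirely on the data, but it is easy to measure with Python's standard-library codecs. The sample data below is synthetic, so real fragment or result files will compress differently:

```python
import lzma
import zlib

# Synthetic stand-in for a result file; real Rosetta data will differ.
data = b"".join(
    b"ATOM  %5d  CA  ALA A%4d  %8.3f%8.3f%8.3f\n"
    % (i, i % 999, i * 0.1, i * 0.2, i * 0.3)
    for i in range(20000)
)

deflated = zlib.compress(data, 9)         # zip-style DEFLATE, max effort
lzma_out = lzma.compress(data, preset=9)  # LZMA, as blackbird suggests

print(f"raw:  {len(data):>9} bytes")
print(f"zlib: {len(deflated):>9} bytes")
print(f"lzma: {len(lzma_out):>9} bytes")
```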
Deamiter Send message Joined: 9 Nov 05 Posts: 26 Credit: 3,793,650 RAC: 0
I would certainly choose a much longer WU size -- anything up to around a week really (at 25%, say around 48 hours). PoorBoy is right, though, that you'd probably want to fix the "leave in memory" error before making this standard. And rather than using a sliding scale, you might consider simply adding an option for a "large WU" which runs somewhere between 3 and 10 times longer (as an arbitrary guess). A lot of us who are running Einstein and CPDN are used to the idea of longer WUs. Yes, it'd have to be more stable than it is now, but once you had all the kinks worked out, I imagine you could get most of the serious crunchers to use the longer WUs.
rbpeake Send message Joined: 25 Sep 05 Posts: 168 Credit: 247,828 RAC: 0
"A lot of us who are running Einstein and CPDN are used to the idea of longer WUs. Yes, it'd have to be more stable than it is now, but once you had all the kinks worked out, I imagine you could get most of the serious crunchers to use the longer WUs." As long as the scientific value of the longer WUs is no less than that of the shorter WUs, it matters not to me (I am fortunate enough to have broadband access). So whatever works best for the project works best for me. :) Regards, Bob P.
nasher Send message Joined: 5 Nov 05 Posts: 98 Credit: 625,341 RAC: 647
I always love the idea of compression and such. Another idea is to do something like they did back at FaD: a lot of the jobs there were able to use the same large files, and the ones that couldn't would download another set of large files (once per set), then keep all those sets on the computer (assuming HD space isn't a problem). Say you currently have 10 different sets of large associated files (all sets 4 MB each, for the sake of argument). When a person downloads his first job he gets the 4 MB common file plus, say, a 0.4 MB job file. If the next work unit uses the same large file, only the next 0.4 MB job file is downloaded. Later he gets a different common file (another 4 MB) but still keeps the first one; now he has a 2-in-10 chance of downloading only the small job itself. It would take little bandwidth to check whether the client already has Largefile03.??? rather than downloading it again. Also, to save hard drive space, when say Largefile42.??? is no longer used you can send another instruction: if Largefile42.??? exists, delete it (thus freeing up HD space). It's an imperfect world, but if we can fix one problem we may be able to fix the next more easily. Nasher P.S. BTW I am in the USA with unlimited DSL upload/download capacity, but I understand the reasons why people want smaller files, and I agree with them. One thing I was always told about computer programming: you have 3 options - fast, cheap, good... you may pick only 2.
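A toy sketch of nasher's scheme in Python. The file names, cache directory, and job-description format are all invented for illustration (this is not FaD's or BOINC's actual protocol): download a common file only if it is not already cached, and honor an explicit delete instruction when a set is retired.

```python
import os

CACHE_DIR = "common_files"   # where large shared files are kept (assumed)

def fetch(name):
    """Stand-in for a real download; just creates an empty placeholder."""
    print(f"downloading {name} ...")
    open(os.path.join(CACHE_DIR, name), "wb").close()

def prepare_job(job):
    """job: {'large_file': ..., 'small_file': ..., 'delete': [...]} (made up)."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    for name in job.get("delete", []):       # retire sets no longer used
        path = os.path.join(CACHE_DIR, name)
        if os.path.exists(path):
            os.remove(path)
            print(f"deleted {name}, freeing HD space")
    if os.path.exists(os.path.join(CACHE_DIR, job["large_file"])):
        print(f"{job['large_file']} already cached, skipping download")
    else:
        fetch(job["large_file"])             # the 4 MB common file, once per set
    fetch(job["small_file"])                 # the 0.4 MB job file, every time

prepare_job({"large_file": "Largefile03.dat", "small_file": "job001.dat"})
prepare_job({"large_file": "Largefile03.dat", "small_file": "job002.dat",
             "delete": ["Largefile42.dat"]})
```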
River~~ Send message Joined: 15 Dec 05 Posts: 761 Credit: 285,578 RAC: 0
Longer WUs would also (if I understood a previous thread correctly) mean less variation in WU length, so they would reduce another problem. Against that is the issue of slow clients. Does BOINC offer any mechanism for a user to say "I want long WUs" or "I want short WUs"? That would be the ideal. On another grid computing project I contributed to, there were 4 sizes of WU, with lengths of 2, 3, 5, or 10 arbitrary units. You started on size 3 and then adjusted to suit your own preferences once you discovered how long the size-3 WUs took. River~~
SwZ Send message Joined: 1 Jan 06 Posts: 37 Credit: 169,775 RAC: 0
"Against that is the issue of slow clients. Does BOINC offer any mechanism for a user to say 'I want long WUs' or 'I want short WUs'? That would be the ideal." I saw "Rosetta@home: control resource share and customize graphics" in the user profile; I think a "number of trajectories per WU" setting could go there. But the exact meaning of "trajectory" is needed before developing this mechanism, for sure... Maybe it is documented somewhere; if so, please point me to the doc. As I understand it now, for one protein we get a big number (maybe ~1,000,000) of independently evaluated Monte Carlo + full-atom MD trajectories, each fully defined by its starting configuration and conditions, and each collecting some result data (not too big). For evaluating a trajectory we need the Rosetta program (transferred only on updates), constant library files (for example sortlib; transferred only on updates), and big protein-specific data (fragments of amino-acid chain templates?; transferred for each protein, together with the protocol of evaluation). After this the CPU can crunch data without further downloads for 1000 years ;-), periodically sending small results back to Rosetta@home. In principle one user could work on only one protein. (Maybe this is not interesting psychologically, but the scientific value of the results does not depend on it.) Really, the lifetime of such a "WU" would be bounded by the duration of the computational experiment (about one week), or by the user's wish to move to another protein. Sorry for my fantasizing... :)
NJMHoffmann Send message Joined: 17 Dec 05 Posts: 45 Credit: 45,891 RAC: 0
I don't know anything about the application used, so what I write here may be total nonsense. Is it possible to split the data the way Einstein does? That is: send one <large_file> (protein data?) and one WU (parameters for the algorithm) at first contact with the user. The next WUs send only new parameters, as long as there are WUs left for this protein. When all of them are done, the next WU contains the instruction to delete <large_file> and sends <next_large_file>. Norbert
David Baker Volunteer moderator Project administrator Project developer Project scientist Send message Joined: 17 Sep 05 Posts: 705 Credit: 559,847 RAC: 0
This is what we would like to do. In the WU argument lists you will see a "-nstruct 10", which means "make ten structures". If we can make the number following -nstruct a user-defined parameter, it would be great; people on dial-up connections could use, for example, -nstruct 50. The question is how to do this within the BOINC setup.
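BOINC delivers project-specific preferences to the client as an XML fragment, so one plausible route is to read a structure count from there and rewrite the -nstruct argument. A hypothetical Python sketch: the <nstruct> tag and the -protein flag are inventions for illustration, not actual Rosetta@home settings.

```python
import xml.etree.ElementTree as ET

DEFAULT_NSTRUCT = 10   # matches the current "-nstruct 10" default

def nstruct_from_prefs(prefs_xml):
    """Pull a user-chosen structure count from project prefs (hypothetical tag)."""
    root = ET.fromstring(prefs_xml)
    node = root.find("nstruct")
    return int(node.text) if node is not None else DEFAULT_NSTRUCT

def build_args(prefs_xml, other_args="-protein 1abc"):
    """Assemble the WU command line with the user-scaled structure count."""
    return f"{other_args} -nstruct {nstruct_from_prefs(prefs_xml)}"

# A dial-up user asks for bigger workunits:
prefs = "<project_preferences><nstruct>50</nstruct></project_preferences>"
print(build_args(prefs))   # -> -protein 1abc -nstruct 50
```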
David E K Volunteer moderator Project administrator Project developer Project scientist Send message Joined: 1 Jul 05 Posts: 1018 Credit: 4,334,829 RAC: 0
We can add such a feature/preference in the next application update. It could be a scaling factor, or something similar to what is described below, that users can set in their project-specific preferences.
STE\/E Send message Joined: 17 Sep 05 Posts: 125 Credit: 4,103,208 RAC: 204
"We can add such a feature/preference in the next application update. It could be a scaling factor, or something similar to what is described below, that users can set in their project-specific preferences." As long as it's adjustable to the WU length each user is comfortable with, everybody should be happy ... :)
FluffyChicken Send message Joined: 1 Nov 05 Posts: 1260 Credit: 369,635 RAC: 0
Something to think about... Would you add 'trickle'-style reporting for a work unit? As I see it now, if the work units get longer it'll take longer for you to get feedback. If you can 'trickle' back the results, say after every 4 iterations, then you will not need to wait until all 50 are done (for example). This means you can analyse sooner. Also, a way to terminate these longer jobs (a 'stop at next iteration' type of thing) would be useful if the results are no longer needed (you have enough information, they are bad, just for fun ;-)). How and if this is possible I have no idea ;) Personally I have no problem with the longer jobs myself :) Hope to try them out. Team mauisun.org
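BOINC does have a trickle-message mechanism (the client API exposes boinc_send_trickle_up, with trickle-down replies from the server), so the control flow could look roughly like this. A Python sketch with stubbed-out functions; the real application is C++, and the report interval here is an invented example value.

```python
TRICKLE_EVERY = 4     # report after every 4 finished structures (example value)

def compute_structure(i):
    """Stub for one folding trajectory; returns its result blob."""
    return f"structure-{i}"

def send_trickle_up(partial_results):
    """Stub for a BOINC trickle-up message carrying partial results."""
    print(f"trickling {len(partial_results)} results to the server")

def server_says_stop():
    """Stub for a trickle-down 'enough, stop now' reply from the project."""
    return False

def run_workunit(nstruct=50):
    pending = []
    for i in range(nstruct):
        pending.append(compute_structure(i))
        if len(pending) >= TRICKLE_EVERY:
            send_trickle_up(pending)   # feedback without waiting for all 50
            pending.clear()
            if server_says_stop():     # early termination at an iteration boundary
                return
    if pending:
        send_trickle_up(pending)       # final partial batch

run_workunit()
```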