difficult target first

Message boards : Number crunching : difficult target first


NewInCasp

Joined: 12 May 06
Posts: 21
Credit: 5,229
RAC: 0
Message 16578 - Posted: 18 May 2006, 22:32:35 UTC

Hi, is it possible to queue difficult targets first in R@H? Yesterday I was checking a few new targets in CASP7 and found that some of them are among the easiest!
Moderator9
Volunteer moderator

Joined: 22 Jan 06
Posts: 1014
Credit: 0
RAC: 0
Message 16595 - Posted: 19 May 2006, 1:15:36 UTC - in response to Message 16578.  

Hi, is it possible to queue difficult targets first in R@H? Yesterday I was checking a few new targets in CASP7 and found that some of them are among the easiest!


The short answer is no. The system will work on them based on their deadlines. There really are no easy or hard ones; there are longer and shorter ones, but the difficulty for the computer is the same. The credits per hour are the same for all work units.

That said, there is a small advantage in working on the larger ones, depending on how you measure your credit scores. If you work on a lot of large proteins, it will take longer to make a single model. The system will report these back less often, but claim higher credit because they took more time to produce. This will raise your RAC score. It will not affect your total credits. Some people like this little extra boost in their RAC.
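For reference, BOINC's RAC is an exponentially decaying average of credit per day with a roughly one-week half-life. The sketch below is a simplified version of that idea (the update rule and the hourly/daily scenario are illustrative assumptions, not the server's exact bookkeeping):

```python
import math

HALF_LIFE_S = 7 * 86400  # BOINC's credit half-life: about one week

def update_rac(rac, credit, dt_seconds):
    """One simplified RAC update: decay the old average, then blend in
    the newly granted credit expressed as credit per day."""
    days = dt_seconds / 86400.0
    w = math.exp(-dt_seconds * math.log(2) / HALF_LIFE_S)
    return rac * w + (1.0 - w) * (credit / days)

# The same 240 credits/day earned two ways for 30 days:
rac_small = rac_big = 0.0
for _ in range(30 * 24):              # small models, reported hourly
    rac_small = update_rac(rac_small, 10.0, 3600)
for _ in range(30):                   # large models, reported daily
    rac_big = update_rac(rac_big, 240.0, 86400)
```

In this model the long-run RAC tracks credit per day regardless of how large each grant is; larger, less frequent grants mainly make the figure jump more right after each report, which is where the short-lived boost described above comes from.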

I am advised that in the next version of the application, the servers will begin a new distribution policy: the very largest of the work units will only be sent to systems with more than 1 GB of installed memory. This is to help people with less memory.
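A server-side gate like the one described might look roughly like this sketch (the host list, function name, and the exact >= 1024 MB comparison are assumptions; BOINC work units do carry a per-WU memory bound that the scheduler compares against a host's reported memory):

```python
BIG_WU_MEMORY_BOUND_MB = 1024  # memory bound attached to the largest work units

# Hypothetical hosts: name -> installed memory in MB, as reported to the server
hosts = {"desktop-a": 2048, "laptop-b": 958, "office-c": 1024}

def eligible_for_big_wu(installed_mb, bound_mb=BIG_WU_MEMORY_BOUND_MB):
    """Send a large work unit only to hosts that meet the memory bound."""
    return installed_mb >= bound_mb

eligible = [name for name, mb in hosts.items() if eligible_for_big_wu(mb)]
```

Note that a host reporting 958 MB falls just under this cut-off, which is exactly the situation raised in the replies below.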

Moderator9
ROSETTA@home FAQ
Moderator Contact
rbpeake

Joined: 25 Sep 05
Posts: 168
Credit: 247,828
RAC: 0
Message 16603 - Posted: 19 May 2006, 3:10:40 UTC - in response to Message 16595.  

I am advised that in the next version of the application, the servers will begin a new distribution policy: the very largest of the work units will only be sent to systems with more than 1 GB of installed memory. This is to help people with less memory.

I have a machine with 958 MB of memory, and I do not use it for other difficult tasks, so it would be nice if I could somehow "opt in" to the large-memory jobs, even though it falls below the 1 GB minimum.

Regards,
Bob P.
Moderator9
Volunteer moderator

Joined: 22 Jan 06
Posts: 1014
Credit: 0
RAC: 0
Message 16605 - Posted: 19 May 2006, 3:43:05 UTC - in response to Message 16603.  

I am advised that in the next version of the application, the servers will begin a new distribution policy: the very largest of the work units will only be sent to systems with more than 1 GB of installed memory. This is to help people with less memory.

I have a machine with 958 MB of memory, and I do not use it for other difficult tasks, so it would be nice if I could somehow "opt in" to the large-memory jobs, even though it falls below the 1 GB minimum.



I don't know how tight they will be on this. They really should not use the BOINC memory figures, because those are not correct. I also have some machines running without any problem that fall below the minimum. I want to convince them to make this a user-selectable parameter. There will be an announcement in the announcements thread for whatever they do.

Moderator9
ROSETTA@home FAQ
Moderator Contact
senatoralex85

Joined: 27 Sep 05
Posts: 66
Credit: 169,644
RAC: 0
Message 16619 - Posted: 19 May 2006, 6:08:50 UTC - in response to Message 16605.  

I don't know how tight they will be on this. They really should not use the BOINC memory figures, because those are not correct. I also have some machines running without any problem that fall below the minimum. I want to convince them to make this a user-selectable parameter. There will be an announcement in the announcements thread for whatever they do.

I would also like to see this made available based on user preference! Although my machine does not have 1 GB of memory, it does have 800 MHz Rambus RDRAM that came standard with Gateway at the time. It is pretty robust!
rbpeake

Joined: 25 Sep 05
Posts: 168
Credit: 247,828
RAC: 0
Message 16628 - Posted: 19 May 2006, 10:56:58 UTC - in response to Message 16605.  
Last modified: 19 May 2006, 10:57:24 UTC

...I want to convince them to make this a user-selectable parameter. There will be an announcement in the announcements thread for whatever they do.

Another factor to consider: I do not use the graphics because I run BOINC as a service, which saves additional memory. So an arbitrary cut-off of 1 GB of memory does not seem to make sense, because there are other factors at play. And from what I read on the Ralph boards, not everyone with less than 1 GB was having issues with the larger work units (and most of those with issues were also running the graphics).

Regards,
Bob P.
BennyRop

Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 16655 - Posted: 19 May 2006, 18:48:47 UTC

My single-core, 2 GHz CPU with 1 GB of RAM hasn't had any problems with Ralph, other than the constant scroll of out-of-work messages and a string of five WUs that were missing (fasta?) files. You're welcome to use it for the large proteins; it's also a service install, so there's no graphics memory usage.
Feet1st

Joined: 30 Dec 05
Posts: 1755
Credit: 4,690,520
RAC: 0
Message 16658 - Posted: 19 May 2006, 19:40:03 UTC - in response to Message 16628.  

...not everyone with less than 1 GB was having issues with the larger work units (and most of those with issues were also running the graphics).

The point of any project like this is to make the experience as problem-free as possible for everyone involved. So they aren't looking for the level at which some systems will work; instead they must find the level at which ALL systems will work. There will still be work for <1 GB systems to crunch on. They are trying to make the best use of the available resources, both our PCs and their developers and support volunteers.

Who knows better what they are running now, and what they plan to run in the future, than the project team? And so who better to establish the guidelines?

I agree, they have taken the choice away from you; it would be better if they allowed you to choose, and defaulted to the conservative setting for those who don't want to have to choose. But it is significantly more difficult for them to allow you to opt in, add preferences and resolve them every time you connect, and test all of that. The bottom line is that they've found "too many" systems have problems with specific WUs, so they are taking steps to make sure those systems no longer hit that problem. One step at a time. Maybe the memory refinements will continue, and they'll be able to bring down even those larger WUs, making it a moot point.

But there are many out there that do not meet the 512 MB the project presently recommends... and those are often the ones posting with problems. And sometimes swearing about how other, less memory-intensive projects run just fine, so R@H must be broken... yadda yadda... "I'm not going to run R@H anymore". So please try to ensure you stay in line with the goals of the greater project, and not just one specific PC environment where you think you can get by with less than the project's recommendations. They aren't saying you are mistaken. They're just saying that too many people are likely to run into problems, and they're taking steps to avoid that.
Add this signature to your EMail:
Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might!
https://boinc.bakerlab.org/rosetta/
tralala

Joined: 8 Apr 06
Posts: 376
Credit: 581,806
RAC: 0
Message 16689 - Posted: 20 May 2006, 10:39:02 UTC

The problem with the opt-in solution is that it is currently not supported by BOINC, whereas automatic distribution based on the memory specification of the target host is. I don't know whether they have the time to change the code.

P.S.: I would suggest >=1024 MB or >1023 MB, since 1 GB should be enough even for the larger ones, and machines with 1 GB are common, whereas more than 1 GB is still the exception. But it depends on the test results on Ralph, I guess.
Ethan
Volunteer moderator

Joined: 22 Aug 05
Posts: 286
Credit: 9,304,700
RAC: 0
Message 16705 - Posted: 20 May 2006, 15:04:15 UTC - in response to Message 16689.  

The problem with the opt-in solution is that it is currently not supported by BOINC, whereas automatic distribution based on the memory specification of the target host is. I don't know whether they have the time to change the code.

P.S.: I would suggest >=1024 MB or >1023 MB, since 1 GB should be enough even for the larger ones, and machines with 1 GB are common, whereas more than 1 GB is still the exception. But it depends on the test results on Ralph, I guess.



I'd suggest something like 950 MB. Many people have integrated video, which takes away from the total amount of memory visible to the OS.
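As a rough illustration of why a threshold just under 1 GB is more forgiving (the host list and shared-video sizes below are hypothetical; actual amounts vary by chipset):

```python
# Hypothetical hosts: (physical RAM in MB, MB reserved by integrated video)
hosts = [(1024, 0), (1024, 64), (1024, 128), (512, 32)]

def visible_mb(physical, shared):
    """Memory the OS (and therefore BOINC) actually sees."""
    return physical - shared

# Count how many hosts clear each candidate cut-off
passing = {
    threshold: sum(1 for p, s in hosts if visible_mb(p, s) >= threshold)
    for threshold in (1024, 950)
}
```

A 1 GB machine sharing 64 MB with its video chip reports only 960 MB, so it clears a 950 MB cut-off but fails a strict 1024 MB one.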
rbpeake

Joined: 25 Sep 05
Posts: 168
Credit: 247,828
RAC: 0
Message 16711 - Posted: 20 May 2006, 15:40:32 UTC - in response to Message 16705.  

I'd suggest something like 950 MB. Many people have integrated video, which takes away from the total amount of memory visible to the OS.

That is exactly my situation! Thanks for thinking of it.

Regards,
Bob P.
Moderator9
Volunteer moderator

Joined: 22 Jan 06
Posts: 1014
Credit: 0
RAC: 0
Message 16720 - Posted: 20 May 2006, 18:17:03 UTC - in response to Message 16711.  
Last modified: 20 May 2006, 18:19:16 UTC

I'd suggest something like 950 MB. Many people have integrated video, which takes away from the total amount of memory visible to the OS.

That is exactly my situation! Thanks for thinking of it.

There is new information on this issue. I already posted somewhere about this, but I can't find it now, so I will repeat it here.

The project has decided that there are a number of issues with the very largest of the work units that go beyond simple memory size. For one thing, there is a memory "leak" occurring when these are run, which they have not solved yet. There are also major differences in the science being performed in those work units. So they have determined that these should be run in a completely different environment, in part because of the issues involved in separating the normal work-unit science from the large-work-unit science. They are really quite different.

As a result, they will continue to reduce the memory footprint of the work units that are to be run on Rosetta, and the extremely large ones will be run in a different system. This eliminates the need for the change in work-unit distribution control. Please don't shoot the messenger.

Moderator9
ROSETTA@home FAQ
Moderator Contact
tralala

Joined: 8 Apr 06
Posts: 376
Credit: 581,806
RAC: 0
Message 16725 - Posted: 20 May 2006, 20:36:09 UTC - in response to Message 16720.  

Please don't shoot the messenger.

...which is tempting, though. ;-) Well, the message is clear: there are some people out there who would really like to help with the more demanding WUs on their big machines. The project team can ask for that any time, be it with big WUs here on Rosetta, on Ralph, or even on a third BOINC offspring. If they have another environment that can produce the science, that's okay as well. Perhaps the BOINC scheduler will become sophisticated enough over time to allow better targeting of high-spec machines with demanding WUs.

dcdc

Joined: 3 Nov 05
Posts: 1829
Credit: 116,107,612
RAC: 66,667
Message 16871 - Posted: 22 May 2006, 21:05:10 UTC

Although it may be irrelevant for this topic now, one thing I'd like to suggest is that if large jobs are being sent out, it might be a good idea to send only one big job per dual-core/dual-CPU computer, if that's possible. One big job and one normal job would be a better proposition than two big jobs for most systems, I'd have thought.
Feet1st

Joined: 30 Dec 05
Posts: 1755
Credit: 4,690,520
RAC: 0
Message 16878 - Posted: 22 May 2006, 22:07:04 UTC - in response to Message 16871.  

...only send one big job per dual core/dual cpu computer if that's possible.

THAT one's going to be tough; I don't think BOINC's rules are that sophisticated. But if they can establish the needed memory PER CPU, then hopefully that achieves the same objective.

At this point, as Mod9 said, the idea of heavy WUs is on hold. I think their original post on the subject was over on Ralph, which is why they couldn't find it.
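The per-CPU idea could be approximated with a check like this sketch (hypothetical function and numbers; as noted in this thread, BOINC's scheduler at the time only enforced a per-WU memory bound, not a per-host job mix):

```python
def max_big_jobs(host_mem_mb, ncpus, wu_mem_mb):
    """How many large work units could run concurrently without
    exceeding the host's RAM: one per CPU at most, and no more
    than the memory allows."""
    return min(ncpus, host_mem_mb // wu_mem_mb)

# A dual-core host with 1 GB should get at most one 700 MB work unit,
# leaving the other core for a normal-sized job.
max_big_jobs(1024, 2, 700)
```

With 2 GB the same dual-core host could take two such jobs, which matches the "one big job plus one normal job" suggestion above for 1 GB machines.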

Add this signature to your EMail:
Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might!
https://boinc.bakerlab.org/rosetta/
Lee Carre

Joined: 6 Oct 05
Posts: 96
Credit: 79,331
RAC: 0
Message 16923 - Posted: 23 May 2006, 17:38:58 UTC - in response to Message 16878.  
Last modified: 23 May 2006, 17:39:11 UTC

...only send one big job per dual-core/dual-CPU computer if that's possible.

THAT one's going to be tough; I don't think BOINC's rules are that sophisticated. But if they can establish the needed memory PER CPU, then hopefully that achieves the same objective.

From my understanding of the system, you're correct: it will be hard, if not impossible, to make such a specification. I don't even think they can specify RAM per CPU, only RAM per WU; this is yet another area in which BOINC is lacking. :(
Want to search the BOINC Wiki, BOINCstats, or various BOINC forums from within firefox? Try the BOINC related Firefox Search Plugins




©2024 University of Washington
https://www.bakerlab.org