Message boards : Number crunching : Large proteins
Tomcat雄猫 (Joined: 20 Dec 14; Posts: 180; Credit: 5,386,173; RAC: 0)
Two questions: How much RAM will these tasks need? Will these tasks be run on Android devices? Thank you.
Admin (Project administrator; Joined: 1 Jul 05; Posts: 4805; Credit: 0; RAC: 0)
These larger-than-usual jobs come from our Robetta structure prediction server. They may or may not be related to COVID-19. The spike protein, which was modeled in late January and early February and continues to be modeled (variants and such), is over 1000 residues and is modeled as a symmetric trimer. The structure has since been determined by cryo-EM (6vsb, 6vxx, and 6vyb), the latter two from Dr. Veesler's lab at the University of Washington.

If the sequence is larger than 1000 residues (2000 is the maximum for a single chain), the memory bound is set to 4 GB, so if your device has enough memory per CPU configured for BOINC, it can run such jobs.
Grant (SSSF) (Joined: 28 Mar 20; Posts: 1679; Credit: 17,803,499; RAC: 22,548)
> If the sequence is larger than 1000 residues (2000 is the maximum for a single chain), the memory bound is set to 4 GB, so if your device has enough memory per CPU configured for BOINC, it can run such jobs.

Many of the present systems have issues when only a couple of tasks require 1.3 GB of RAM. Will these larger-RAM tasks be released as a batch, or (ideally) as an intermittent stream, one at a time over many hours (depending on how many there are, of course)? That would reduce the chances of a system getting more than one to process at a time (I'm thinking a 4 GB RAM requirement will make even just one of them unsuitable for any Pi system).

Grant
Darwin NT
lakotamm (Joined: 28 Jun 19; Posts: 22; Credit: 171,192; RAC: 0)
It would be great if there were still work units for ARM64 devices with 0.95 GB of RAM. It seems the community has found ways to run multiple WUs on these devices by combining them with RAM compression. Word is still spreading, though, and I expect the computing power coming from these devices to only increase.

That being said, my laptop with 24 GB of RAM is ready to take the new 4 GB WUs with ease.
[VENETO] boboviz (Joined: 1 Dec 05; Posts: 1994; Credit: 9,598,063; RAC: 8,840)
> If the sequence is larger than 1000 residues (2000 is the maximum for a single chain), the memory bound is set to 4 GB, so if your device has enough memory per CPU configured for BOINC, it can run such jobs.

Do you plan to add "long WUs" and "short WUs" options to the user's profile (like other projects do)?
Grant (SSSF) (Joined: 28 Mar 20; Posts: 1679; Credit: 17,803,499; RAC: 22,548)
> Do you plan to add "long WUs" and "short WUs" options to the user's profile (like other projects do)?

In this case I think "large RAM requirement" and "small RAM requirement" might be the way to go.

Grant
Darwin NT
[VENETO] boboviz (Joined: 1 Dec 05; Posts: 1994; Credit: 9,598,063; RAC: 8,840)
> In this case I think "large RAM requirement" and "small RAM requirement" might be the way to go.

Yes. The exact definition of these WUs isn't important; what matters is that there will be the possibility to choose.
Millenium (Joined: 20 Sep 05; Posts: 68; Credit: 184,283; RAC: 0)
Thank you for the news, time to put these 16 GB to use!
Mod.Sense (Volunteer moderator; Joined: 22 Aug 06; Posts: 4018; Credit: 0; RAC: 0)
There will still be a mix of other WUs in addition to these large-protein WUs. If I understand the definition of a WU's memory bound correctly, it means the 4 GB tasks will not be sent to any machine where BOINC is configured to use less than 4 GB of memory. Just like the existing WUs, they won't require their maximum memory very often. Crossing the "bound" during the run would cause the BOINC Manager to abort the task, so it is a number you shouldn't actually see reached often.

But I wanted to get people thinking about how long-running, high-memory models will impact the work cache and the BOINC Manager's ability to plan work, and about setting your WU runtime preference accordingly. Maybe even plan some new resource share to low-memory BOINC projects so the memory requirement of the typical mix of work is reduced.

Rosetta Moderator: Mod.Sense
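For those who prefer to manage this locally rather than through the website, the BOINC client also reads a global_prefs_override.xml file from its data directory. A minimal sketch, assuming a current BOINC client, with the percentages purely illustrative:

```xml
<global_preferences>
    <!-- let BOINC use at most 50% of RAM while the computer is in use -->
    <ram_max_used_busy_pct>50</ram_max_used_busy_pct>
    <!-- and up to 90% while it is idle -->
    <ram_max_used_idle_pct>90</ram_max_used_idle_pct>
</global_preferences>
```

These correspond to the "use at most X% of memory" web preferences; a lower busy percentage means fewer 4 GB tasks can run side by side.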
Charles Dennett (Joined: 27 Sep 05; Posts: 102; Credit: 2,081,660; RAC: 566)
Will there be a way to identify these long-running tasks based on their name?

-Charlie
Jim1348 (Joined: 19 Jan 06; Posts: 881; Credit: 52,257,545; RAC: 0)
> Crossing the "bound" during the run would cause the BOINC Manager to abort the task, so it is a number you shouldn't actually see reached often.

What I have seen in the past on my 12-core Ryzens with 16 GB of memory (11 cores on BOINC) is that if there is not enough memory, the last WU just won't start up, and I then have only 10 running. If that is all that happens, no problem. If something crashes, then I think we need to make other arrangements.
Mod.Sense (Volunteer moderator; Joined: 22 Aug 06; Posts: 4018; Credit: 0; RAC: 0)
Yes, the BOINC Manager will suspend one or more tasks as "waiting for memory" as necessary. Eventually the high-memory task will be closer to its deadline than the other tasks, and BOINC will hold back enough other threads to allow it to run. But this is based on actual memory requests during the run, not the 4 GB memory bound defined for the WU.

Rosetta Moderator: Mod.Sense
WBT112 (Joined: 11 Dec 05; Posts: 11; Credit: 1,382,693; RAC: 0)
Some work preferences would still be nice, assuming that a lot of people just want to crunch without looking into the preferences every few days. Maybe just send a maximum of 1-2 of these "monsters" to each user, or let them decide manually if they want more. I don't want my CPU cores idle "waiting for memory", and I can't check all day long.
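Until such a preference exists, one client-side stopgap is an app_config.xml in the Rosetta project directory that caps how many tasks run concurrently. Note that this caps all tasks of the app, not just the high-memory ones, and the app name below is an assumption to be checked against the <app_name> entries in client_state.xml. A minimal sketch:

```xml
<app_config>
    <app>
        <!-- assumed app name; verify against client_state.xml -->
        <name>rosetta</name>
        <!-- run at most 4 Rosetta tasks at once, whatever the core count -->
        <max_concurrent>4</max_concurrent>
    </app>
</app_config>
```

After saving the file, Options > Read config files in the BOINC Manager (or a client restart) makes it take effect.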
Admin (Project administrator; Joined: 1 Jul 05; Posts: 4805; Credit: 0; RAC: 0)
I want to make sure people understand that the long-running jobs may be a separate, unrelated issue. The 12v1n_ tasks are running into rare issues, and we are looking into this.

There were two issues: one was large proteins reaching the watchdog time limit before producing a model, and the other was with these cyclic-peptide 12v1n_ jobs. The former has hopefully been dealt with by extending the watchdog time; the latter is likely a protocol issue where, in rare cases, the modeling trajectory continues for a long time due to filtering criteria, but we need to look into this further.
[VENETO] boboviz (Joined: 1 Dec 05; Posts: 1994; Credit: 9,598,063; RAC: 8,840)
> I don't want my CPU cores idle "waiting for memory", and I can't check all day long.

+1
[VENETO] boboviz (Joined: 1 Dec 05; Posts: 1994; Credit: 9,598,063; RAC: 8,840)
> There will still be a mix of other WUs in addition to these large-protein WUs. If I understand the definition of a WU's memory bound correctly, it means the 4 GB tasks will not be sent to any machine where BOINC is configured to use less than 4 GB of memory.

How does the server scheduler recognize the amount of RAM in a host? For example, I have a PC with 24 cores and 24 GB of RAM. Up to now, no big problems with R@H. But if you release the "monster RAM" WUs, I will only be able to download 6 WUs and my machine will be full. Please give us the possibility to decide.
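For what it's worth, the client reports the host's hardware in every scheduler request (the same values are visible in client_state.xml), so the server can compare a workunit's memory bound against the reported RAM. A fragment of that host description, with the numbers purely illustrative for a 24-core, 24 GB machine:

```xml
<host_info>
    <p_ncpus>24</p_ncpus>
    <!-- total RAM in bytes: 24 * 1024^3 = 25769803776 -->
    <m_nbytes>25769803776.000000</m_nbytes>
</host_info>
```

Whether a host then receives a 4 GB workunit also depends on the "use at most X% of memory" preferences, since those reduce the memory the scheduler considers available.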
Admin (Project administrator; Joined: 1 Jul 05; Posts: 4805; Credit: 0; RAC: 0)
We may consider boosting credits for these jobs. |
Mod.Sense (Volunteer moderator; Joined: 22 Aug 06; Posts: 4018; Credit: 0; RAC: 0)
Keep in mind that many of the WUs now are tagged as 2 GB, and you are seeing them generally run in less than 1 GB. Also keep in mind that you are unlikely to pull four of them at the same time. I have started a thread to discuss fair credit for completing a monster.

Rosetta Moderator: Mod.Sense
Jim1348 (Joined: 19 Jan 06; Posts: 881; Credit: 52,257,545; RAC: 0)
> I don't want my CPU cores idle "waiting for memory", and I can't check all day long.

With virtual cores, it is not much of a loss if you have one out of twelve idle, for example. You aren't losing a full core, only an instruction stream, and the hardware resources will still be in use. If you are losing two virtual cores, then that is the equivalent of a full core. One defense (the best) is to get more memory. But if the project needs these tasks to get the science done, then the loss of cores for less important work is really their decision, and not a problem for me.

EDIT: As was said, another way is to run a second project that requires less memory. I often use TN-Grid, a gene-expansion project that has some secondary implications for COVID-19 (among a lot of others). It requires only about 56 MB per WU. Just set it to maybe 10% resource share or less, and it will fill in automatically whenever you have a free core.
Sid Celery (Joined: 11 Feb 08; Posts: 2122; Credit: 41,194,088; RAC: 9,858)
> Keep in mind that many of the WUs now are tagged as 2 GB, and you are seeing them generally run in less than 1 GB. Also keep in mind that you are unlikely to pull four of them at the same time. I have started a thread to discuss fair credit for completing a monster.

I probably haven't reached the thread announcing that something big has changed, but my laptop (4 cores, 8 GB RAM) was running two of the 12v1n tasks simultaneously, with another queued up to run next, and I just spotted that the server had cancelled all three after an update. I'll catch up shortly.