Problems and Technical Issues with Rosetta@home

Falconet

Joined: 9 Mar 09
Posts: 354
Credit: 1,276,393
RAC: 2,018
Message 103760 - Posted: 7 Dec 2021, 20:08:46 UTC - in response to Message 103758.  
Last modified: 7 Dec 2021, 20:09:44 UTC

Looks like another batch of 4.2 work is available.

NKG2D, EPHA2 and BCMA. Reading up on all three, I guess this work could be cancer-related (I'm assuming these were sent in by the same person/team and that they are all related). Or not at all.



Looks like there are an extra 1.4 million queued tasks when compared to this morning's 2.2 million Pythons.
Greg_BE

Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 103761 - Posted: 7 Dec 2021, 20:44:30 UTC - in response to Message 103754.  
Last modified: 7 Dec 2021, 20:47:25 UTC



One of these things is not like the other:



"Virtualbox is not installed." That's it. Nothing about why it is needed; why it should be installed; the consequences of not installing it. No links about how to do it. Nothing about equipment or OS requirements.

The entitlement and arrogance is astonishing. We're only worth the time for four words.

I'm giving up on Rosetta. I'll leave it as a project in case they ever decide to go back, but it's going to be a 1 percenter with SiDock getting the hog's share of resources.



You don't need the latest VBox, because the latest does not work well with Python, so go to this link and download 5.2.x; after that is installed, install the extension pack.
https://www.virtualbox.org/wiki/Download_Old_Builds_5_2
But I should warn you, these Python tasks are massive.
The VDI is 9 GB and will take 10 minutes to download.
The RAM requirement per task is 7629.39 MB real and 1003.xx MB virtual.
You can set your run time to whatever you want: keep the default 8 hours or switch to 6.
They generate between 126 and 180+ credits over 6 hours.

We said that 4.2 was non-VBox work, and that is hit and miss... mostly miss.
So VBox is the only thing available now as a long-term project here at RAH.


I appreciate your replies and your guidance. I'm not going to install and set up that software on my computers just to run Rosetta. I'll move on to SiDock or other projects instead.



If you hurry, you can get a bit of the action in 4.2: 27,000+ queued up at the moment,
122,000 already processed, and only 1,967 people on 4.2 right now.
That leaves about 13-14 tasks per person to crunch (27,000 / 1,967 ≈ 13.7).

As for protein or bio-science work (non-VBox), it's just FAH, SiDock and WCG outside of the odds and ends here. TACC is seriously hit and miss; you can go for weeks without getting any work from them, so it's not worth the time or electricity. It's just stuff they can't run on their supercomputer. I personally do not know of any other protein-folding projects that have work for PCs.
Sid Celery

Joined: 11 Feb 08
Posts: 2140
Credit: 41,518,559
RAC: 10,612
Message 103763 - Posted: 7 Dec 2021, 22:27:06 UTC - in response to Message 103760.  

Looks like another batch of 4.2 work is available.

NKG2D, EPHA2 and BCMA. Reading up on all three, I guess this work could be cancer-related (I'm assuming these were sent in by the same person/team and that they are all related). Or not at all.

Looks like there are an extra 1.4 million queued tasks when compared to this morning's 2.2 million Pythons.

Yes, but all seem limited to 100 decoys before stopping short, so they're only running between 1 and 2 hours here.

They won't last long at all
Jean-David Beyer

Joined: 2 Nov 05
Posts: 195
Credit: 6,613,600
RAC: 6,755
Message 103764 - Posted: 7 Dec 2021, 22:27:29 UTC - in response to Message 103758.  

Looks like another batch of 4.2 work is available.


Yes it does:
Tue 07 Dec 2021 03:10:29 PM EST | Rosetta@home | Sending scheduler request: To fetch work.
Tue 07 Dec 2021 03:10:29 PM EST | Rosetta@home | Requesting new tasks for CPU
Tue 07 Dec 2021 03:10:33 PM EST | Rosetta@home | Scheduler request completed: got 9 new tasks


 1938180 Dec  7 15:10 epha2_site1_3skj_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_2pa9fe3l.zip
 1943095 Dec  7 15:10 epha2_site1_3skj_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_6gv2zz0m.zip
 1996781 Dec  7 15:10 epha2_site1_3skj_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_8yf2ct8i.zip
 1878391 Dec  7 15:10 epha2_site3_3c8x_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_5rx2cb2v.zip
 1990560 Dec  7 15:10 epha2_site3_3c8x_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_6mw9rn0b.zip
 2165492 Dec  7 15:10 niv_site2_6pd4_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_3me3qk7w.zip
 1953078 Dec  7 15:10 niv_site2_6pd4_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_9wm2tx8g.zip
 1923515 Dec  7 15:10 nkg2d_site1_4s0u_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_1zj8yf9y.zip
 2256771 Dec  7 15:10 nkg2d_site1_4s0u_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_7lu6db8g.zip

Greg_BE

Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 103765 - Posted: 7 Dec 2021, 23:34:49 UTC - in response to Message 103763.  
Last modified: 7 Dec 2021, 23:39:02 UTC

Looks like another batch of 4.2 work is available.

NKG2D, EPHA2 and BCMA. Reading up on all three, I guess this work could be cancer-related (I'm assuming these were sent in by the same person/team and that they are all related). Or not at all.

Looks like there are an extra 1.4 million queued tasks when compared to this morning's 2.2 million Pythons.

Yes, but all seem limited to 100 decoys before stopping short, so they're only running between 1 and 2 hours here.

They won't last long at all



An hour or so ago there were just 27K; now it's up to 29K, but the total number of systems went up too, so the average drops to about 8 tasks per system.

Run time to finish is around 2 hours, even though it shows 8 hours while running.
Greg_BE

Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 103766 - Posted: 7 Dec 2021, 23:36:07 UTC - in response to Message 103764.  

Looks like another batch of 4.2 work is available.


Yes it does:
[snip]



I still have to keep my project_max_concurrent in play for when I go back to Python.
I don't have time to keep editing app_config to remove it and put it back all the time.
Jean-David Beyer

Send message
Joined: 2 Nov 05
Posts: 195
Credit: 6,613,600
RAC: 6,755
Message 103767 - Posted: 8 Dec 2021, 2:50:10 UTC - in response to Message 103765.  

Yes, but all seem limited to 100 decoys before stopping short, so they're only running between 1 and 2 hours here.


Here too. All mine hit 100 decoys.
Falconet

Joined: 9 Mar 09
Posts: 354
Credit: 1,276,393
RAC: 2,018
Message 103771 - Posted: 8 Dec 2021, 10:12:35 UTC

Yes, 100 decoys. Mine just took much longer because they are running on my AMD 2500U laptop.
Greg_BE

Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 103777 - Posted: 8 Dec 2021, 19:43:18 UTC - in response to Message 103771.  

Yes, 100 decoys. Mine just took much longer because they are running on my AMD 2500U laptop.



DONE :: 100 starting structures 7075.52 cpu seconds
This process generated 100 decoys from 100 attempts

196 minutes used
MStenholm

Joined: 18 Apr 20
Posts: 18
Credit: 26,577,630
RAC: 19,732
Message 103782 - Posted: 8 Dec 2021, 19:58:47 UTC - in response to Message 103777.  
Last modified: 8 Dec 2021, 20:00:37 UTC

Most countries have 60 minutes per hour and 60 seconds per minute, but according to your calculation Belgium has 36 seconds per minute :). I know what you are doing, but you have a long life ahead of you, and someone ought to snap you out of that misconception.

7075 / (60*60) ≈ 1.96 hours, not 196 minutes. 1.96 hours = 1 hour plus 0.96*60 = 57.6 minutes.
60 + 57.6 = 117.6 minutes.
Greg_BE

Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 103784 - Posted: 9 Dec 2021, 16:11:14 UTC - in response to Message 103782.  

Most countries have 60 minutes per hour and 60 seconds per minute, but according to your calculation Belgium has 36 seconds per minute :). I know what you are doing, but you have a long life ahead of you, and someone ought to snap you out of that misconception.

7075 / (60*60) ≈ 1.96 hours, not 196 minutes. 1.96 hours = 1 hour plus 0.96*60 = 57.6 minutes.
60 + 57.6 = 117.6 minutes.



Middle of the night math! LMAO.
Thanks for the correction.
Greg_BE

Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 103785 - Posted: 9 Dec 2021, 16:12:46 UTC

Something different... I have been getting some 4.2 stuff and now I am back to Python.
Has anyone figured out the scheduler routine yet?
If I were to raise the limit in app_config (project_max_concurrent) to 3 or 4, would I get all Python or a blend of 4.2 and Python?
robertmiles

Joined: 16 Jun 08
Posts: 1233
Credit: 14,338,560
RAC: 2,014
Message 103786 - Posted: 9 Dec 2021, 19:08:47 UTC - in response to Message 103785.  

Something different... I have been getting some 4.2 stuff and now I am back to Python.
Has anyone figured out the scheduler routine yet?
If I were to raise the limit in app_config (project_max_concurrent) to 3 or 4, would I get all Python or a blend of 4.2 and Python?

It seems to be a random mix of whatever is available, rather than a schedule. I suspect it is whatever they are currently teaching their students to create workunits for.
Jean-David Beyer

Joined: 2 Nov 05
Posts: 195
Credit: 6,613,600
RAC: 6,755
Message 103787 - Posted: 9 Dec 2021, 19:12:32 UTC - in response to Message 103785.  


Something different... I have been getting some 4.2 stuff and now I am back to Python.
Has anyone figured out the scheduler routine yet?
If I were to raise the limit in app_config (project_max_concurrent) to 3 or 4, would I get all Python or a blend of 4.2 and Python?


I have the following in my app_config file:

[/var/lib/boinc/projects/boinc.bakerlab.org_rosetta]$ cat app_config.xml
<app_config>
<project_max_concurrent>3</project_max_concurrent>
</app_config>

and the boinc-client runs up to three of these work units at a time. I am getting plenty of work units. I have a 16-core machine, but the boinc-client is instructed to run no more than 8 BOINC tasks at a time. The various app_config.xml files allow up to 4 ClimatePrediction tasks, 5 WCG, 3 Rosetta, and 2 Universe tasks at a time, though the priority of the Universe tasks is so low that they hardly ever run, and then only one at a time.
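
In case it matters, that "no more than 8" cap is just the CPU-percentage computing preference; a minimal sketch of what it looks like when set locally, assuming the standard global_prefs_override.xml in the BOINC data directory (50% of my 16 cores = 8 tasks; the value is only an example):

<global_preferences>
    <!-- Run BOINC on at most half of the 16 cores, i.e. 8 tasks at a time -->
    <max_ncpus_pct>50.0</max_ncpus_pct>
</global_preferences>

The client picks this up on restart, or when the Manager is told to re-read the local prefs file.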
Currently I have these Rosetta tasks on my machine and none of them are running.

3443919 Dec  9 13:09 fcgr3a_site1_5mn2_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_7ts1jp9r.zip
3453466 Dec  9 13:09 fcgr3a_site1_5mn2_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_9zi4uu3j.zip
2646638 Dec  9 13:09 her2_site4_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_0ic4nn1s.zip
2544075 Dec  9 13:09 her2_site4_3h_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_6kw0cf1j.zip
2804583 Dec  9 13:09 niv_site3_6vy5_jhr_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_2fs1jz6r.zip
2585084 Dec  9 13:09 niv_site3_6vy6_jhr_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_2cm7fe4w.zip
2748992 Dec  9 13:09 niv_site3_6vy6_jhr_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_2jg7fo9z.zip
2640314 Dec  9 13:09 niv_site3_6vy6_jhr_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_7jp7ej6d.zip
2575060 Dec  9 13:09 niv_site3_6vy6_jhr_ggraft_1_SAVE_ALL_OUT_IGNORE_THE_REST_8yo1qr6j.zip


This is when I have been receiving work units for Rosetta:

Tue 07 Dec 2021 03:10:29 PM EST | Rosetta@home | Sending scheduler request: To fetch work.
Tue 07 Dec 2021 03:10:29 PM EST | Rosetta@home | Requesting new tasks for CPU
Tue 07 Dec 2021 03:10:33 PM EST | Rosetta@home | Scheduler request completed: got 9 new tasks
Wed 08 Dec 2021 12:30:55 AM EST | Rosetta@home | Sending scheduler request: To fetch work.
Wed 08 Dec 2021 12:30:55 AM EST | Rosetta@home | Requesting new tasks for CPU
Wed 08 Dec 2021 12:30:59 AM EST | Rosetta@home | Scheduler request completed: got 8 new tasks
Wed 08 Dec 2021 09:52:40 AM EST | Rosetta@home | Sending scheduler request: To fetch work.
Wed 08 Dec 2021 09:52:40 AM EST | Rosetta@home | Reporting 2 completed tasks
Wed 08 Dec 2021 09:52:40 AM EST | Rosetta@home | Requesting new tasks for CPU
Wed 08 Dec 2021 09:52:42 AM EST | Rosetta@home | Scheduler request completed: got 8 new tasks
Wed 08 Dec 2021 10:55:08 PM EST | Rosetta@home | Sending scheduler request: To fetch work.
Wed 08 Dec 2021 10:55:08 PM EST | Rosetta@home | Reporting 3 completed tasks
Wed 08 Dec 2021 10:55:08 PM EST | Rosetta@home | Requesting new tasks for CPU
Wed 08 Dec 2021 10:55:16 PM EST | Rosetta@home | Scheduler request completed: got 9 new tasks


I do not know about the Python tasks; I cannot run them, so I never get any.
Greg_BE

Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 103788 - Posted: 9 Dec 2021, 23:10:02 UTC - in response to Message 103786.  

Something different... I have been getting some 4.2 stuff and now I am back to Python.
Has anyone figured out the scheduler routine yet?
If I were to raise the limit in app_config (project_max_concurrent) to 3 or 4, would I get all Python or a blend of 4.2 and Python?

It seems to be a random mix of whatever is available, rather than a schedule. I suspect it is whatever they are currently teaching their students to create workunits for.



I think you missed what I am asking.
I get both 4.2 and Python.
However, I had to limit RAH to 2 tasks at a time because, while 4.2 runs just fine, takes very little memory, and processes very fast, Python is a memory hog, and if 3 of them run it kills my other projects' ability to run seamlessly.

So I was wondering if anyone has seen a pattern from the two schedulers: does it do 4.2 for a while and then go back to Python, or does it mix Python and 4.2 at the same time, 1:1 or whatever? Or is it just a random mix of the two?
Greg_BE

Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 103789 - Posted: 9 Dec 2021, 23:14:41 UTC - in response to Message 103787.  
Last modified: 9 Dec 2021, 23:15:12 UTC


Something different... I have been getting some 4.2 stuff and now I am back to Python.
Has anyone figured out the scheduler routine yet?
If I were to raise the limit in app_config (project_max_concurrent) to 3 or 4, would I get all Python or a blend of 4.2 and Python?


I have the following in my app_config file:

[/var/lib/boinc/projects/boinc.bakerlab.org_rosetta]$ cat app_config.xml
<app_config>
<project_max_concurrent>3</project_max_concurrent>
</app_config>

and the boinc-client runs up to three of these work units at a time. I am getting plenty of work units. I have a 16-core machine, but the boinc-client is instructed to run no more than 8 BOINC tasks at a time. The various app_config.xml files allow up to 4 ClimatePrediction tasks, 5 WCG, 3 Rosetta, and 2 Universe tasks at a time, though the priority of the Universe tasks is so low that they hardly ever run, and then only one at a time.
Currently I have these Rosetta tasks on my machine and none of them are running.

[snip]


I do not know about the Python tasks; I cannot run them, so I never get any.



Before I got interested in Python, I let RAH take whatever it wanted.
Then when I got into Python and let it do its thing, it was 3 tasks taking up over half the memory and stalling other tasks. So I took your project_max_concurrent and limited it to 2. But I see the app "name" does not affect anything; project is project, so RAH is limited to 2 tasks no matter which application is running. I am afraid to release it, because then Python will take over. Unless that was just a matter of RAH debt when 4.2 ran out. All I know is that Python x3 is a system killer unless I buy some new memory sticks, and then we are talking an absurd amount of memory to keep BOINC running the way I like it to run, or dropping Python, which seems crazy after complaining about not getting it for so long.
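
Though for what it's worth, app_config.xml is also supposed to accept a per-app cap, so maybe I just had the name wrong; a minimal sketch, assuming the Python app's internal name is rosetta_python_projects (that name is a guess from the log messages, so check the <name> fields in client_state.xml before relying on it):

<app_config>
    <app>
        <!-- Cap only the VirtualBox/Python tasks; 4.2 would run unthrottled -->
        <name>rosetta_python_projects</name>
        <max_concurrent>2</max_concurrent>
    </app>
</app_config>

If that works, project_max_concurrent could be dropped entirely and only Python would stay limited to 2.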

Why can't you run Python? I forgot.
.clair.

Joined: 2 Jan 07
Posts: 274
Credit: 26,399,595
RAC: 0
Message 103790 - Posted: 10 Dec 2021, 2:19:44 UTC - in response to Message 103789.  
Last modified: 10 Dec 2021, 2:34:05 UTC

All I know is that Python x3 is a system killer unless I buy some new memory sticks.

I don't know what motherboard you have, but don't skimp on it; you may have to max out what the board can take.
................
Recently, because of going to some dodgy website and downloading a new version of boinc mangler, 7.16.20, that had `virtual pox` in with it {it's not had VB before},
I got my computer infected with pythons in cages, and they are horrible big things that really take over the poor thing.
I decided to upgrade one of my systems from 32GB to 128GB {not its max}.
It's `only` a twin Xeon E5-2697, 48 threads, with 128GB of RAM, and the pythons can really stuff it: even if actual memory use is 50GB and only 16 run at a time, it'll still run plenty more R4.20 [and other work].
Further up this thread I had problems with another system getting daft disk-space messages that I now ignore {and I had just upped that one from 16 to 32 GB};
now even this one gets them.
Just now, Rosetta is using 77GB of disk space and it has 246GB more free available for BOINC to use.
.............
09-Dec-2021 06:40:46 [Rosetta@home] Rosetta needs 1907.35MB more disk space. You currently have 0.00 MB available and it needs 1907.35 MB.
09-Dec-2021 06:40:46 [Rosetta@home] rosetta python projects needs 19073.49MB more disk space. You currently have 0.00 MB available and it needs 19073.49 MB
............
Crazy stuff that I don't bother about any more.
And I have Rosetta set to a zero resource share on that system, to try and limit what funky stuff it can do.
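
If anyone ever wants to chase those disk messages instead of ignoring them, the usual knobs are the disk limits in the computing preferences; a minimal global_prefs_override.xml sketch, with placeholder numbers (the element names are standard BOINC, the values are only examples):

<global_preferences>
    <!-- Let BOINC use up to 300 GB of disk, but always leave 10 GB free -->
    <disk_max_used_gb>300</disk_max_used_gb>
    <disk_min_free_gb>10</disk_min_free_gb>
    <disk_max_used_pct>90</disk_max_used_pct>
</global_preferences>

Though with 246GB showing free and the client still claiming 0.00 MB available, the message itself looks buggy rather than the limits being too tight.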
Sid Celery

Joined: 11 Feb 08
Posts: 2140
Credit: 41,518,559
RAC: 10,612
Message 103791 - Posted: 10 Dec 2021, 2:31:19 UTC - in response to Message 103763.  

Looks like another batch of 4.2 work is available.

NKG2D, EPHA2 and BCMA. Reading up on all three, I guess this work could be cancer-related (I'm assuming these were sent in by the same person/team and that they are all related). Or not at all.

Looks like there are an extra 1.4 million queued tasks when compared to this morning's 2.2 million Pythons.

Yes, but all seem limited to 100 decoys before stopping short, so they're only running between 1 and 2 hours here.

They won't last long at all

I did get a few Robetta tasks that ran the full 8 hours, but it looks like all the Rosetta 4.20 tasks have been downloaded now.
Won't be long before I'm back running WCG.

My resource share is set to Rosetta 29 - WCG 1 but my RAC is 6.5k to 19.6k atm... <sigh>
Jean-David Beyer

Joined: 2 Nov 05
Posts: 195
Credit: 6,613,600
RAC: 6,755
Message 103792 - Posted: 10 Dec 2021, 4:11:11 UTC - in response to Message 103789.  

Why can't you run Python? I forgot.


I do not have VirtualBox, so I cannot run them.
Greg_BE

Joined: 30 May 06
Posts: 5691
Credit: 5,859,226
RAC: 0
Message 103793 - Posted: 10 Dec 2021, 7:01:12 UTC - in response to Message 103792.  

Why can't you run Python? I forgot.


I do not have VirtualBox, so I cannot run them.



Do you want to run them?
Then download VirtualBox 5, not 6.
And enable the option to get them in your profile on this page: https://boinc.bakerlab.org/rosetta/show_host_detail.php?hostid=5958977 (that's your Windows machine).
But they are memory hogs (can only do 3 tasks), plus the VDI is 9 GB.