Message boards : GPU Users Group message board : Need help on the applications
Keith Myers Joined: 29 Mar 20 Posts: 97 Credit: 332,619 RAC: 7 |
Please read Message https://boinc.bakerlab.org/rosetta/forum_thread.php?id=13646&postid=92508 I don't see anywhere to set the sub-projects that I want to run, or that should be running on my compatible hardware. Do I need to write an anonymous-platform app_info.xml to get rid of the 32-bit i686 application and tasks that Rosetta is sending me? I was about to edit that platform out of the client_state.xml file, but thought I should first ask what everyone else is doing, and whether anyone has an already-configured app_info.xml I could copy. |
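A lighter-weight alternative to an anonymous-platform app_info.xml (a sketch, assuming a stock BOINC client; the option comes from the standard cc_config.xml client configuration, and the i686 platform name is illustrative): the no_alt_platform option stops the client from reporting alternate platforms such as i686-pc-linux-gnu to project schedulers, so only 64-bit work should be assigned.

```xml
<cc_config>
  <options>
    <!-- report only the primary platform (e.g. x86_64-pc-linux-gnu)
         to schedulers, so no 32-bit i686 apps or tasks get assigned -->
    <no_alt_platform>1</no_alt_platform>
  </options>
</cc_config>
```

The file goes in the BOINC data directory and is picked up via Options > Read config files, without editing client_state.xml by hand.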
Buckeye4lf Joined: 29 Aug 08 Posts: 43 Credit: 8,596,164 RAC: 543 |
Please read Message https://boinc.bakerlab.org/rosetta/forum_thread.php?id=13646&postid=92508 I am running the stock apps that the project pushes... it would be nice to get something special. We should get Juan over here :) I also have an AMD, and now that I am looking, my run times are longer than my target time, much longer. |
Tom M Joined: 20 Jun 17 Posts: 97 Credit: 16,726,096 RAC: 30,059 |
Keith, you may be experiencing the "new to project" effect, where the scheduler sends out a test of all the apps. I don't think I have gotten any 32-bit apps in a long while, but I have been on this project off and on for a while, so my results may be skewed. Tom |
Keith Myers Joined: 29 Mar 20 Posts: 97 Credit: 332,619 RAC: 7 |
Hi Tom, I thought that might be the case, but I found only one post in the forums about the 4.07 app, and I saw the majority of people running the 4.08 applications. I was thinking of editing the client_state.xml file to remove the 32-bit apps and writing an app_info.xml. I guess I should just let the project settle on, hopefully, the 64-bit app. |
Buckeye4lf Joined: 29 Aug 08 Posts: 43 Credit: 8,596,164 RAC: 543 |
I checked some of my completed jobs and they are all x64. I have had this project running for a while, though... Let's get Juan and Ville over here to start working on these apps :) |
juan BFP Joined: 26 Dec 08 Posts: 5 Credit: 322,924 RAC: 0 |
I'm here, hi to all. But I can't get any work. :( |
Keith Myers Joined: 29 Mar 20 Posts: 97 Credit: 332,619 RAC: 7 |
Now that the project is back... hiya Juan! The project ran out of work this morning, it seems. I am still crunching because my first scheduler connection to the project sent me 260 tasks. Way too much to finish in 5 days. All my tasks are in "high priority" mode. The majority will time out and get resent. |
Keith Myers Joined: 29 Mar 20 Posts: 97 Credit: 332,619 RAC: 7 |
I aborted all my work. It wouldn't have finished in time anyway. I will wait for the next app version beyond the current 4.12. |
Buckeye4lf Joined: 29 Aug 08 Posts: 43 Credit: 8,596,164 RAC: 543 |
> I aborted all my work. Wouldn't have finished in time anyway. Will wait for the next app version beyond the current 4.12.
I was reading in the forums not to use a buffer for this project. Their deadlines are only two days from download, so they can analyze results and make updates to the path forward. Very different from SETI, where they still have not looked at results from a decade ago. If I were you and still wanted to run Rosetta, I would return the settings to 0.1 days of work with no additional buffer. |
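The 0.1-day/no-buffer setting above can also be set locally rather than through the website (a sketch, assuming the standard global_prefs_override.xml in the BOINC data directory; the 0.1/0 values are the ones suggested in this thread):

```xml
<global_preferences>
  <!-- keep only about 0.1 days of work queued... -->
  <work_buf_min_days>0.1</work_buf_min_days>
  <!-- ...with no additional buffer on top of that -->
  <work_buf_additional_days>0</work_buf_additional_days>
</global_preferences>
```

The client applies it after rereading the local preferences file (or a restart); local overrides take precedence over the web preferences for that host.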
Buckeye4lf Joined: 29 Aug 08 Posts: 43 Credit: 8,596,164 RAC: 543 |
> I'm here. Hi to all.
They are forcing larger target completion times to help with this problem. I set my target computation time to 24 hours; I then get downloads for all CPU cores that will run for 24 hours, well under the 2-day deadline. There is no advantage to getting the short-duration jobs. |
Keith Myers Joined: 29 Mar 20 Posts: 97 Credit: 332,619 RAC: 7 |
> I aborted all my work. Wouldn't have finished in time anyway. Will wait for the next app version beyond the current 4.12.
I didn't have a choice. I got 260 tasks on the first connection to the project after joining. It used an inherited global cache setting from the last project I had changed, which was Einstein. So before I could even configure the project's cache settings, it had already sent me more work than I could possibly finish on the first download. |
Mr. Kevvy Joined: 17 Sep 07 Posts: 2 Credit: 14,027,169 RAC: 102 |
This may help someone... I had forgotten about this parameter. I couldn't find any other reason why BOINC would do this on one machine, and the change below resolved the problem: when running CPU-only Rosetta alongside GPU jobs that each reserve a core, BOINC would decide to terminate a GPU task to start a Rosetta CPU task in its core, even when Rosetta has zero resource share. Grrr! Solution: create an app_config.xml in the BOINC/projects/boinc.bakerlab.org_rosetta folder as follows: <app_config> <project_max_concurrent>#</project_max_concurrent> </app_config> replacing "#" with the maximum number of Rosetta tasks to run at once. (Note that project_max_concurrent belongs directly under <app_config>, not inside an <app_version> element.) Then in BOINC, Options > Read config files... it is not even necessary to restart. So, for instance, on a two-GPU machine with six cores, # would be 4 or less; then Rosetta can't take the GPU tasks' cores. |
Keith Myers Joined: 29 Mar 20 Posts: 97 Credit: 332,619 RAC: 7 |
Yes, I have relied on max_concurrent and project_max_concurrent statements since the start of BOINC, and I still use them. The Pandora client also gives me very good control over exactly how many tasks I want to carry in my cache for every project. But since I have moved on to Universe as my primary CPU project, I only run the Nano SBC here for now, along with GPU crunching of Einstein work. If I ever do come back here for work on the main PCs, I can now control how much work to carry. But as I said in my original post, I didn't get a chance to have any controls in place, as my initial connection when joining the project overloaded me with too much work right off the bat. |
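For reference, the two limits mentioned above live side by side in the same project app_config.xml (a sketch of the standard BOINC format; the app name "rosetta" is taken from this thread, and the counts of 4 are illustrative):

```xml
<app_config>
  <!-- per-app limit: at most 4 running tasks of this one application -->
  <app>
    <name>rosetta</name>
    <max_concurrent>4</max_concurrent>
  </app>
  <!-- project-wide limit: at most 4 running tasks across all of
       the project's applications, regardless of app -->
  <project_max_concurrent>4</project_max_concurrent>
</app_config>
```

The per-app limit is useful when a project ships several applications and only one needs throttling; the project-wide limit is the blunter tool used earlier in this thread to keep cores free for GPU tasks.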
©2025 University of Washington
https://www.bakerlab.org