Message boards : Rosetta@home Science : Bakerlab update for 2012
dcdc | Joined: 3 Nov 05 | Posts: 1832 | Credit: 119,675,695 | RAC: 11,002
Hi All at the Bakerlab! Could we get a brief update on the aims for R@H in 2012? As ever, I'm curious what's being worked on, and I know there are plenty of others who are too... e.g. is it mainly methodology, model accuracy (RMSD?), efficiency, hardware support (e.g. SMP/GPU/large-memory models etc.), specific diseases, or a combination of the above and other things? Also, is there a requirement for more compute power, and if so, is there a limit to the compute power that could realistically be utilised successfully by the lab? ta Danny
Big_Bang | Joined: 10 Feb 10 | Posts: 35 | Credit: 51,915 | RAC: 0
I'm very interested as well. What research is currently going on? What are the plans for the next CASP this year? Compared to Folding@Home, there's very little communication from the Baker Lab, imo. When I run Rosetta, I have no idea what I'm crunching...
Aegis Maelstrom | Joined: 29 Oct 08 | Posts: 61 | Credit: 2,137,555 | RAC: 0
Hi, and let me join you in the "give us more information" club. I think Rosetta keeps missing its train to gather more crunchers, again and again. For instance, there has been an update of the software to version 3.19, but we have not been told what is new in it (new protocols? improved algorithms?). Similarly, the core projects, all the recent CASPs, side projects run by visiting scientists, etc. lack publicity and explanation. Rosetta is neither the only nor the most credit-rewarding game in town; I think more information would help convince me, and many crunchers, to put more resources into R@h (and into BOINC generally). Best regards from BOINC@Poland, a.m.
robertmiles | Joined: 16 Jun 08 | Posts: 1232 | Credit: 14,281,662 | RAC: 1,150
[quote]Hi All at the Bakerlab![/quote]

They have already mentioned elsewhere that their current applications are not well suited to conversion to a GPU version: there are so many steps that must be run in a particular order, rather than many steps at once, that running on a GPU could actually be slower than running on a CPU. The same problem also prevents gaining much from a multi-threaded CPU version running on more than one CPU core at a time. This could change for some new application based on a different algorithm (in other words, not rosetta or minirosetta), but such new applications tend to appear years after the previous one.

Their programs are memory-hungry enough that only the most expensive graphics boards would allow running more than one workunit at a time on a GPU, and they don't expect enough users with such boards to make the changes needed for that worthwhile any time soon.

They haven't yet said whether they need more computing power. However, I'd expect getting more computing power from users to help for about as long as they can expand the server capacity to match. If they decide that a large-memory version is useful, I'd expect them to start by modifying their applications so they can be compiled separately for 32-bit mode and 64-bit mode, since 64-bit mode makes it easier to address large amounts of memory.

I've seen nothing on what the new 3.20 version is for, but my guess is that part of it is to repair the problem that often kept the graphics section from working in the 3.19 version.
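To illustrate why that kind of computation parallelizes so badly, here's a toy sketch in Python. It assumes nothing about Rosetta's actual code beyond it being built around this general style of Monte Carlo sampling: each Metropolis step needs the conformation produced by the step before it, so a single trajectory can't be spread across GPU threads; the easy parallelism is across independent trajectories, which is what separate workunits already give you.

[code]
import math
import random

def metropolis_trajectory(energy, perturb, start, n_steps, temperature=1.0):
    """Toy Metropolis Monte Carlo loop (nothing from Rosetta's real code).

    Each iteration needs the conformation produced by the previous one,
    so the steps of one trajectory cannot run in parallel; the easy
    parallelism is across independent trajectories, which is what
    separate workunits already provide.
    """
    conf = start
    e = energy(conf)
    for _ in range(n_steps):
        candidate = perturb(conf)   # depends on the current conformation
        e_new = energy(candidate)
        # Metropolis criterion: always accept downhill moves, accept
        # uphill moves with Boltzmann probability.
        if e_new <= e or random.random() < math.exp((e - e_new) / temperature):
            conf, e = candidate, e_new
    return conf, e

# Trivial 1-D "energy landscape" just to make the sketch runnable:
print(metropolis_trajectory(
    energy=lambda x: (x - 3.0) ** 2,
    perturb=lambda x: x + random.uniform(-0.5, 0.5),
    start=0.0,
    n_steps=10000,
))
[/code]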
Mad_Max | Joined: 31 Dec 09 | Posts: 209 | Credit: 26,083,851 | RAC: 16,607
Hi, count me in too. I and my team (I've been writing overviews and translations about the state of the project on my team's forum) would like more information about the current situation and the plans for this year.

And I'll join in the question about computing power. We have formed the opinion that the project's current throughput is limited not by the total computing power of the crunchers but by offline activity in the laboratory: the speed at which people can process the calculated results, synthesize the modeled proteins in vitro (in a test tube), test their interactions, conduct crystallography, etc., and eventually write the scientific publications. Or are we wrong?
Mad_Max | Joined: 31 Dec 09 | Posts: 209 | Credit: 26,083,851 | RAC: 16,607
Yes, I think so too. But it is only a guess. Information about version 3.19 has also not been published (only the date of the version change on the main page), so the difference between 3.19 and 3.17 (and from earlier; the last official update and thread covered version 3.14) is not known.
Rocco Moretti | Joined: 18 May 10 | Posts: 66 | Credit: 585,745 | RAC: 0
While I can't speak for the others working with Rosetta@home(*), my focus is on looking at protein-small molecule interactions (small molecules like drugs or enzyme substrates). One thing I hope to work more on in 2012 is a project to evaluate what we need to do to Rosetta to improve the prediction of protein-small molecule interactions. Where Rosetta@home comes into play is in running simulations of many different protein-small molecule pairs under multiple different conditions, to see which conditions produce the best results.

I already have some initial runs done on Rosetta@home (thank you!), but Mad_Max's impressions are a fair assessment. The bottleneck on most projects is post-processing and evaluating the results which come back from Rosetta@home, and figuring out what the best next step is. Unfortunately, this isn't something that you can just throw more compute power at to fix.

As JonazzDJ mentions, the big Rosetta@home news for 2012 is likely to be CASP10 (scheduled to start in April). While I'm not personally involved with CASP, the general philosophy and preliminary plans floating around the lab match the gist of Mad_Max's impression: the current limits on producing good CASP results aren't a lack of computing power, but rather limits in setting up runs (e.g. picking starting points for the simulations) as well as limits in picking the "best" structure from the results that come back. From what I understand, a number of protocol improvements have been made, but how much they'll affect Rosetta@home usage, I can't say.

(*) One thing that I think might not be fully appreciated is the diversity of projects being run on Rosetta@home. There's structure prediction, protein-protein docking, protein-protein design, protein-small molecule docking, enzyme design, ... . All of these have related components, but there's quite a bit of difference in how they're implemented and approached, and in what the goals are for each. It's a bit different from some other BOINC projects where there's a single goal with a single methodological hammer, and the difference between work units is which nail you're pounding in.
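To give a flavour of what that post-processing looks like, here's a minimal sketch only: it assumes each returned model appears as one line in a whitespace-delimited score table with "score" and "description" columns, which is a simplification of the real files, and the real analysis adds clustering, filters, and a lot of eyeballing.

[code]
# Minimal sketch of one post-processing step: pulling the lowest-energy
# models out of the thousands that come back from volunteers.  The
# assumed file format (a header line naming "score" and "description"
# columns, then one model per line) is a simplification of the real
# score files.

def lowest_energy_models(path, n=10):
    with open(path) as fh:
        header = fh.readline().split()
        score_col = header.index("score")
        desc_col = header.index("description")
        models = []
        for line in fh:
            fields = line.split()
            if len(fields) == len(header):   # skip malformed lines
                models.append((float(fields[score_col]), fields[desc_col]))
    models.sort()                            # lowest (best) energy first
    return models[:n]

for score, name in lowest_energy_models("scores.txt"):
    print(f"{score:10.2f}  {name}")
[/code]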
Mark Rush | Joined: 6 Oct 05 | Posts: 13 | Credit: 52,262,787 | RAC: 8,565
Rocco: Thanks very much for your post; it was really interesting. It would be incredibly nice if each of the different people working with Rosetta took 25 minutes and posted a very brief description of their work. I expect that if this were done on a semi-regular basis, it would increase the enthusiasm and motivation of us, the volunteers doing the crunching. And that might keep more people crunching, which I presume is a good thing. Mark
robertmiles | Joined: 16 Jun 08 | Posts: 1232 | Credit: 14,281,662 | RAC: 1,150
[quote](*) One thing that I think might not be fully appreciated is the diversity of projects being run on Rosetta@home. There's structure prediction, protein-protein docking, protein-protein design, protein-small molecule docking, enzyme design, ... . All of these have related components, but there's quite a bit of difference in how they're implemented and approached, and what the goals are for each. It's a bit different from some other boinc projects where there's a single goal with a single methodological hammer, and the difference between work units is which nail you're pounding in.[/quote]

Rocco: How about posting how to identify which of those various projects each workunit is for?
Big_Bang | Joined: 10 Feb 10 | Posts: 35 | Credit: 51,915 | RAC: 0
[quote](*) One thing that I think might not be fully appreciated is the diversity of projects being run on Rosetta@home. There's structure prediction, protein-protein docking, protein-protein design, protein-small molecule docking, enzyme design, ... . All of these have related components, but there's quite a bit of difference in how they're implemented and approached, and what the goals are for each. It's a bit different from some other boinc projects where there's a single goal with a single methodological hammer, and the difference between work units is which nail you're pounding in.[/quote]

Perhaps. The downside to this is that you can't achieve many things in the short term; the few resources you have are distributed over various projects. But I think this is good. Rosetta isn't tackling one particular disease, and proteins are linked to TONS of diseases. So it takes a long, perhaps boring time (for us crunchers at least) to lay the foundations for various projects. Just look at Folding@Home: after 11 years, they are finally able to develop methods which can be used for disease-related research (especially Alzheimer's, and also the flu to a lesser extent). I can't wait to see how Rosetta will evolve in the coming years!
Michael Gould | Joined: 3 Feb 10 | Posts: 39 | Credit: 15,440,990 | RAC: 4,348
A huge thanks to all the project people who have recently given us a window into the applied science side of things. Incredibly exciting! Baker et al, keep up the great work, and good luck!
Kenneth DePrizio | Joined: 15 Jul 07 | Posts: 15 | Credit: 3,123,915 | RAC: 0
Yes, I too would like to thank you for the recent posts regarding what's being worked on at Rosetta. It's nice to know what exactly my computer is contributing to with all these workunits being crunched.