Posts by bkil

21) Message boards : Number crunching : Rosetta 4.1+ and 4.2+ (Message 95584)
Posted 30 Apr 2020 by bkil
Post:
Well, a 10 year old quad core machine with 4GB RAM isn't bad at all. Why aren't you running something nice on it like a recent version of Lubuntu (or Xubuntu)? It's free and pretty efficient on old hardware.

If your processor is from 10 years ago, it should definitely support 64-bit - this would enhance crunching speed as well. A quick test showed that on a 4GB machine, a 32-bit Puppy Linux could use 3.5GB RAM, while a 64-bit Puppy Linux could use all of it (running from RAM disk while at it). That's not a negligible difference at all.

I would definitely not connect an XP machine to a network (or to a power outlet, for that matter). It's almost 20 years old now and has gone unmaintained for years. You are begging to get hacked.

/OFF We're in the process of evaluating software to run on 15-20 year old hardware donated to charity, checking whether it would still be usable, and the outlook seems pretty good so far.
22) Message boards : Number crunching : The most efficient cruncher rig possible (Message 95583)
Posted 30 Apr 2020 by bkil
Post:
If you used the fastest zram compression available (LZ4?), you might get an effective 6GB out of a 4GB rPi4.
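A rough sketch of such a setup (the device name, sizes and swap priority are illustrative, and this needs root plus the zram module):

```shell
# Sketch: lz4-compressed zram swap on a 4GB board (sizes are illustrative)
sudo modprobe zram
echo lz4 | sudo tee /sys/block/zram0/comp_algorithm
echo 6G  | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0   # prefer zram over any disk-backed swap
```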

I've done a dirty patch on BOINC myself on a low-memory PC so that it schedules as much work as the zram-backed memory amount allows, instead of limiting itself to the physical amount. This gets rid of the "waiting for memory" messages. I had to patch it because the client caps the memory allowance at under 100% when requesting jobs, although I guess that once you have the jobs, the scheduler may allow >100% settings (TODO).

It has been working pretty well for weeks now.
23) Message boards : Number crunching : The most efficient cruncher rig possible (Message 95582)
Posted 30 Apr 2020 by bkil
Post:
Thanks for sharing.
See here how Recent Average Credit is computed:


You basically need to run your rigs for weeks for the numbers to be comparable, as you've rightly concluded that you can get batches of "lucky" (uncalibrated) work every once in a while.

For a fairer comparison, I've outlined a protocol above that could work better. TL;DR:
- Grab the WU command line from `ps -e f` (I think it is also included in a file)
- Stop BOINC
- Copy away the slot directory of a running WU
- Run the given command line for each core (preferably from separate directories)
- Check how many decoys it produces after 8 hours (sleep && kill) and/or plot the decoy production progress on a graph as per the state/log (you may need to pipe through a timestamping tool or poll)
- The executables are not compatible between a Pi and a PC, but the data files and parameters should be (TODO).
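The steps above could be wired together roughly like this (a non-runnable sketch: the BOINC path, slot number and the wrapper script holding the captured command line are all placeholders for values from your own host):

```shell
# Sketch of the benchmark protocol; every path and name is a placeholder
sudo systemctl stop boinc-client
cp -a /var/lib/boinc-client/slots/0 ~/bench   # snapshot a running WU's slot
cd ~/bench
sh ./captured_cmdline.sh &    # your script with the line from `ps -e f`
sleep $((8 * 3600))           # let it crunch for 8 hours
kill %1                       # then count decoys from the state/log files
```

Repeat once per core, each copy in its own directory, so the runs don't clobber each other's output.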

This could already give a hint, but to be even fairer, you would need to repeat this for each representative kind of WU (I think there are fewer than a dozen of them).

I'm not at all surprised that you find a legacy desktop computer much less efficient; this can be seen from the earlier numbers. But the right kind of legacy laptop is much more competitive with an rPi4, as they usually contain high-efficiency, ultra-low-power CPUs.

24) Message boards : Number crunching : Tells us your thoughts on granting credit for large protein, long-running tasks (Message 95567)
Posted 29 Apr 2020 by bkil
Post:
I think if you deploy a vast number of nodes from fixed OS images that have the BOINC folder hard-coded, then a clashing host ID could result in something like this.
25) Message boards : Number crunching : Running on a 4GB Raspberry Pi 4 - How to? (Message 95564)
Posted 29 Apr 2020 by bkil
Post:
That sounds nice, especially the default consumption of 3.5W. Could you perhaps plot a diagram with each attempted frequency and the lowest stable over_voltage at that frequency? It can also be set to a negative number. I'm interested in how much undervolting matters for temperatures.
26) Message boards : Number crunching : Running on a 4GB Raspberry Pi 4 - How to? (Message 95400)
Posted 26 Apr 2020 by bkil
Post:
Could you please answer the questions posted above?
27) Message boards : Number crunching : Running Rosetta on Raspberry Pi 3B+ (how to guide) (Message 95399)
Posted 26 Apr 2020 by bkil
Post:
Newer versions of zram also offer a mem_limit option; have a look here:
https://www.kernel.org/doc/html/latest/admin-guide/blockdev/zram.html
You could set it to 70-80% of RAM, for example, while also keeping the max size at 200%. This would enable utilizing the swap as well (although a kernel with zswap would be more ideal for this).

After you measure the overhead that deflate causes (is it kswapd?), you may try some other compression algorithm as well.

I'll soon have a look at whether I can patch out malloc via LD_PRELOAD to enable KSM. That could help a lot.

You may want to see whether setting /proc/sys/vm/page-cluster to 0 could help with the zram overhead.
https://www.kernel.org/doc/html/latest/admin-guide/sysctl/vm.html#page-cluster
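Putting the two knobs above together for a 4GB board might look like this (a sketch; the sizes are illustrative and mem_limit needs a reasonably recent kernel):

```shell
# Allow up to ~200% of RAM of uncompressed data in the zram swap, but cap
# the compressed pool at ~75% of physical RAM (values for a 4GB board)
echo 8G | sudo tee /sys/block/zram0/disksize
echo 3G | sudo tee /sys/block/zram0/mem_limit
# avoid reading ahead 8 pages per fault, which mostly hurts with zram
echo 0 | sudo tee /proc/sys/vm/page-cluster
```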
28) Message boards : Number crunching : Tells us your thoughts on granting credit for large protein, long-running tasks (Message 95348)
Posted 25 Apr 2020 by bkil
Post:
I've already answered some of your questions above regarding efficiency and whatnot:
- https://boinc.bakerlab.org/rosetta/forum_thread.php?id=13833&postid=95140

If battery use is an issue for you, see also:


You can actually compute an approximate performance/watt quite easily from the CPU list shared earlier in this thread and some Wikipedia or datasheet lookups for power consumption.

BOINC can run on many Android phones in the background regardless of whether you are using them or not. An aging phone from many years ago can still crank out as much RAC as a Raspberry Pi 4. It is usually set up so it only computes when it is on the charger and has finished its charging cycle during the night. At the same time, phones with iOS can only compute with DreamLab while the screen is on.

As you've rightly noted, a PC is more universal and supports more projects. An SBC can be more efficient in credit/watt or credit/$ and can take up less space, but you will need to maintain more nodes. With the right tools and experience this shouldn't be an issue, but you should keep it in mind.

So although we may not be able to declare a clear winner, it's good to be aware of all the options.

29) Message boards : Number crunching : Running on a 4GB Raspberry Pi 4 - How to? (Message 95341)
Posted 25 Apr 2020 by bkil
Post:
It's good to know that a heat sink can prove sufficient in milder climates. At first, the Raspberry Pi 4 was known to run acceptably only with a fan.

Did you apply all techniques for saving power and reducing temperatures mentioned in this thread? I think these were:

30) Message boards : Number crunching : The most efficient cruncher rig possible (Message 95151)
Posted 22 Apr 2020 by bkil
Post:
I see. I found a few positive signs for this chipset, but I haven't found a concrete answer for this motherboard yet:



I guess we'll just have to try, and if it won't boot, I'll grab some cheap second-hand option, preferably of the low-power passive kind, like a GT710 for $40, a GT210 for $15 or an Nvidia P283 for $5, though the older ones would cost dearly in electricity, so I hope they underclock and undervolt well.

31) Message boards : Number crunching : Tells us your thoughts on granting credit for large protein, long-running tasks (Message 95143)
Posted 22 Apr 2020 by bkil
Post:
I think your question is off-topic here, but let me give a TL;DR.

I can see under your account that you have dozens of in-progress WUs. Please visit the computing preferences under your account and reduce your "store at least ..." and "store up to an additional ..." values. They should probably sum to less than 1 day, even down to 0.1+0.1 days during debugging, while BOINC is learning your processing rate.

According to this task, it indeed took 24 hours of CPU time to complete 195 decoys:
https://boinc.bakerlab.org/rosetta/result.php?resultid=1153332354
Please double-check the target CPU runtime in your Rosetta@home preferences under your account. It defaults to 8 hours, although 24 hours should still be doable. Deadlines are around 3 days, I think.
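For reference, that task works out to roughly 8 decoys per CPU-hour:

```shell
# 195 decoys over 24 CPU-hours from the linked task
awk 'BEGIN { printf "%.1f decoys/hour\n", 195 / 24 }'
# → 8.1 decoys/hour
```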
32) Message boards : Number crunching : Tells us your thoughts on granting credit for large protein, long-running tasks (Message 95142)
Posted 22 Apr 2020 by bkil
Post:
I've processed the CPU list table from the above post. Because the sum is much less than the one on the homepage, I think this may include every registered member of the project, not only the active ones. Also note that HT CPUs are overestimated by at least 50% in the total stats (they simply multiply thread count by per-thread flops). As HT is much more prevalent at the high end than at the low end (think Celerons/Pentiums), this skews the stats even further to the right.
21428.9 TFlops total; 97.8928 GFlops/host mean; 218902 hosts; 20.34 GFlops/host median
  64915 < 5 GFlops
  10834 < 10 GFlops
  19593 < 15 GFlops
  12726 < 20 GFlops
  11298 < 25 GFlops
  10666 < 30 GFlops
   8273 < 35 GFlops
   5766 < 40 GFlops
   1993 < 45 GFlops
   2626 < 50 GFlops
   1451 < 55 GFlops
   3696 < 60 GFlops
   2406 < 65 GFlops
   1783 < 70 GFlops
   1363 < 75 GFlops
   1437 < 80 GFlops
   2547 < 85 GFlops
   1959 < 90 GFlops
   4437 < 95 GFlops
    198 < 100 GFlops
    332 < 105 GFlops
    133 < 110 GFlops
     22 < 115 GFlops
    298 < 120 GFlops
  28904 < 125 GFlops
    102 < 135 GFlops
    452 < 140 GFlops
    404 < 145 GFlops
    228 < 150 GFlops
     22 < 160 GFlops
    355 < 165 GFlops
     14 < 175 GFlops
     15 < 180 GFlops
     23 < 195 GFlops
     21 < 200 GFlops
     20 < 205 GFlops
     20 < 210 GFlops
     11 < 215 GFlops
     19 < 220 GFlops
     16 < 225 GFlops
    174 < 245 GFlops
    126 < 250 GFlops
     12 < 275 GFlops
     19 < 290 GFlops
     30 < 315 GFlops
     55 < 335 GFlops
    135 < 380 GFlops
     14 < 405 GFlops
     11 < 630 GFlops
     47 < 645 GFlops
  16686 < 830 GFlops
33) Message boards : Number crunching : Tells us your thoughts on granting credit for large protein, long-running tasks (Message 95140)
Posted 22 Apr 2020 by bkil
Post:
Yes, I agree that we should not crunch on everything. I meant to say on every computer where it is worth it, as per my thread "The most efficient cruncher rig possible". Sorry, this part of the sentence got lost; I had to retype this message because no drafts are saved on this forum.

We should do exact computations on this, but my gut feeling is that crunching on normal, non-extreme, non-server hardware can be at least somewhat efficient if it is:
    - more recent than 5 years
    - more recent than 10 years underclocked
    - more recent than 10 years portable



You could actually produce a histogram/median/average of our current fleet from this data: https://boinc.bakerlab.org/rosetta/cpu_list.php
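As a sketch, if you dumped that table into a whitespace-separated text file with the per-host GFLOPS in, say, the second column (the file name and column index are assumptions about how you export it), the median could be computed with standard tools:

```shell
# Median of column 2 (GFLOPS/host) of a whitespace-separated dump
sort -n -k2 cpus.txt | awk '{ v[NR] = $2 }
  END { if (NR % 2) print v[(NR + 1) / 2]
        else        print (v[NR / 2] + v[NR / 2 + 1]) / 2 }'
```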

That said, I think the machine distribution is quite skewed towards the higher end compared to the general population, so it shouldn't be considered representative. Also note that computers are usually recycled after about 15-20 years in general, so you shouldn't see a large number of such old machines in operation anyway.

By "only a few top notch computers" I meant that I expect them to be far less than 1% of the population, according to my gut feeling.

Also, I expect that most deployed high-performance computers already serve a purpose and usually couldn't offer their unused capacity for volunteer computing, as a given company made a big investment to purchase and operate them. On the other hand, there exists a vast number of computers just sitting there all day in businesses, schools and homes. If we assume we are only talking about the more efficient crunchers, the benefit of their computation should far outweigh their cost in electricity.

And even if you are not running 24/7 but run BOINC in the background at low priority, it still has higher energy efficiency due to the components that are shared between a given project and the user. For example, if a user's machine idles at 30W, then the +60W CPU power cost is less than operating a dedicated cruncher at 90W, either in their own home or in a separate lab. This reduces global warming and also produces less electronic waste (fewer servers to manufacture means fewer to dispose of).
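With the example wattages above, the marginal energy of background crunching comes out a third lower than that of a dedicated box:

```shell
# 60W marginal CPU cost on an already-running 30W machine vs. a 90W
# dedicated cruncher doing the same work
awk 'BEGIN { printf "%.0f%% less energy\n", 100 * (1 - 60 / 90) }'
# → 33% less energy
```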

34) Message boards : Number crunching : The most efficient cruncher rig possible (Message 95139)
Posted 22 Apr 2020 by bkil
Post:
Thank you for the insights.

Please see the top post for the other parts of my requirements. No GPU should be needed for operation, and I could borrow one for installation if it doesn't support PXE out of the box. I hope it boots without a GPU after setup, though... I haven't seen one that doesn't.

The more features a motherboard has, the greater its power consumption usually is, so I would be extra reluctant to pay for any feature that will not be used. Also, the more expensive a board one buys today, the smaller the budget for the next board 5 years from now. And a friendly discount is available on this specific RAM kit and motherboard at the moment.

If I decided to pop in a GPU after a few years after all, it would definitely not be 6 of them, and even then the PSU would prove to be underpowered for that build. My policy is to build something you are sure is optimal right now, not something that may someday have features that could "come in useful". That's usually a more efficient use of resources.

Also, if such a build turns out okay, I could recommend it to friends as well, maybe get a second one myself, and/or swap with my peers when they are also building something. So if the original requirements outlined in the top post were extended with GPU folding support, a second, GPU-specific "most efficient build" forum thread should be created (lots of questions there regarding cooling, power, risers, I/O usage, tweaks like those you mention, probably better motherboards, how many CPU cores of what kind per GPU, what kind of GPU for best efficiency, etc.).
35) Message boards : Number crunching : The most efficient cruncher rig possible (Message 95137)
Posted 22 Apr 2020 by bkil
Post:
That's actually a good question.

Have you seen a motherboard that wouldn't boot without a GPU? If yes, do you know a way I could look this up for this specific model?

As per the top post, a GPU can be borrowed during installation (or I could image the SSD directly using an external enclosure on a different computer), but no GPU would be needed for operation.
36) Message boards : Number crunching : Running Rosetta on Raspberry Pi 3B+ (how to guide) (Message 95136)
Posted 22 Apr 2020 by bkil
Post:
I see you have since found the separate topic for the Raspberry Pi 4; I would post rPi4-specific questions there instead.

Comparing the rPi3 and rPi4, the latter consumes more but should also produce more. We don't have concrete numbers on the performance/watt of each, but my gut feeling is that the rPi4 should be a little better. Both should be able to run for hours if you fit a huge heat sink, or a medium-sized heat sink plus a fan on the rPi4.

I've also posted some rPi4 thermal tests in the other thread.
37) Message boards : Number crunching : The most efficient cruncher rig possible (Message 95127)
Posted 22 Apr 2020 by bkil
Post:
Do you think this one could do the trick for now, costing $525 upfront?

38) Message boards : Number crunching : Tells us your thoughts on granting credit for large protein, long-running tasks (Message 95118)
Posted 22 Apr 2020 by bkil
Post:
Folding@home offered various bonuses during its lifetime: I think they had a beta bonus for completing WUs that were not correctly calibrated yet or might crash, a big bonus for upper-end requirement outliers, bigadv for tasks requiring lots of cores, runtime and RAM, and a quick-return bonus for completing short-deadline work while running 24/7.


Over time, this resulted in people running 24/7, upgrading their boxes, and generally better-equipped computers joining.

On the flip side, this caused people with less than top-notch hardware to leave or not join in the first place because they didn't feel their contribution to be competitive or valuable.

In contrast, volunteer computing works best if as many join as possible (down to a certain level of energy efficiency, say hardware up to 10 years old) - every little bit counts. We only have a few top-notch 32+ core machines with beefy GPUs around the world, but if we contributed every phone, tablet and low-to-mid-end office machine, typically with 2-4 cores, our computing capacity could increase by orders of magnitude. (I.e., we have far less than a million hosts, while billions of personal computing devices exist in the world.)

39) Message boards : Number crunching : Running Rosetta on Raspberry Pi 3B+ (how to guide) (Message 95064)
Posted 21 Apr 2020 by bkil
Post:
I read a test some time ago that compared wifi vs. Bluetooth tethering from a phone, and for light browsing use cases, Bluetooth consumed far less. Unfortunately, I couldn't find the reference just now.

Also, Bluetooth 4/5 has been designed for always-on operation and should improve power efficiency even further, so I wouldn't be surprised if it were still the winner. How to set it up is another question; I think you would be a pioneer there, as very few seem to use Bluetooth.

As wifi also has power saving (correct AP settings can improve this), the claimed ~100mW idle consumption sounds plausible. Even if Bluetooth consumed a tenth as much, consider that if you modulated the wifi to connect only 1/48th of the time, the average would come down to about 2mW anyway, where we are nearing diminishing returns (compared to 300mW for ethernet and ~5W for the Pi itself). I think the LEDs should also consume about 25mW each.
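The duty-cycle arithmetic above is simply:

```shell
# ~100 mW idle wifi, connected only 1/48th of the time on average
awk 'BEGIN { printf "%.2f mW average\n", 100 / 48 }'
# → 2.08 mW average
```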
40) Message boards : Number crunching : Running Rosetta on Raspberry Pi 3B+ (how to guide) (Message 95043)
Posted 21 Apr 2020 by bkil
Post:
If you look at the Raspberry Pi 3 of the original poster, you can see that it is crunching correctly without an issue:


We can only tell how the power efficiency of a Pi3 vs. Pi4 vs. Ryzen compares if somebody posts power consumption estimates after having done every available power optimization step.

For best results, you should have an AP in each room you are placing a node in. You may consider connecting them either via cabling or by configuring them as WDS repeaters or mesh routers, ideally with a separate backhaul band if they are dual-band. OpenWrt can do all this on many cheap routers. You can get a cheap OpenWrt-capable wifi router with an external antenna for $5-10; it is usually much more reliable to spread three of these around your house on separate channels than to try to purchase the most powerful one available.

Just for kicks, you may even experiment with building a Bluetooth piconet, Wifi P2P Direct or mesh networking between the Pi nodes as well; I'm not sure how well the hardware is suited for this.






©2022 University of Washington
https://www.bakerlab.org