Newbie Question: Different Points for the Same Crunching Time

Profile Rafael C Valente

Joined: 14 Apr 20
Posts: 2
Credit: 18,196,210
RAC: 0
Message 95270 - Posted: 24 Apr 2020, 4:17:30 UTC
Last modified: 24 Apr 2020, 4:32:08 UTC

Guys, I'm new to Rosetta, but something is very weird, or maybe someone can explain to me what is going on.

Two machines (one is a VMware virtual machine with 15 processors, the second is physical with 16 processors; both processors are nearly the same, by the way), both with the same crunching time, but the virtual one is earning double the points. Same BOINC client, same Windows version... Why?

I just want to understand, so I can optimize my boxes even more...

Thanks in advance!!!
ID: 95270
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 95272 - Posted: 24 Apr 2020, 4:55:01 UTC - in response to Message 95270.  
Last modified: 24 Apr 2020, 4:55:36 UTC

Interestingly, your host with twice as much memory is NOT the one scoring the higher points.

However, looking at the benchmarks between the two machines:
the 16GB VM machine, which got twice the credits
     Measured floating point speed   1000 million ops/sec
     Measured integer speed          1000 million ops/sec
the 32GB machine, which got half as many credits
     Measured floating point speed   3333.31 million ops/sec
     Measured integer speed          9428.7 million ops/sec


So, that's not making a lot of sense just on the chalkboard. The machine with triple the benchmark got half the credit per unit time.

I'm just showing these facts so others can more easily offer analysis and comment as well (your profile is set to hide your computers, so I had to fill in the URLs with the task numbers to see the details). I should point out that it looks like the relative disparity of credit spans about a dozen WUs on each machine.

I can just offer that credit is based on how hard the specific tasks are to compute (as based on the experience of the machines reporting in before you for that same specific WU batch). And that credit is granted per model completed. Your high credit WU completed 169 models (decoys), vs. 90 models on the other machine. So, on a per model completed basis, the credits between the two are very close.
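
To put rough numbers on that per-model comparison, here is a quick Python sketch. Only the decoy counts (169 and 90) come from the actual tasks; the credit figures are made-up placeholders, since the real numbers aren't quoted here:

# Per-decoy comparison. Decoy counts are from the tasks above; credits are assumed.
wu_vm   = {"credit": 800.0, "decoys": 169}  # high-credit WU on the VM host (credit assumed)
wu_real = {"credit": 430.0, "decoys": 90}   # lower-credit WU on the physical host (credit assumed)

for name, wu in (("VM host", wu_vm), ("physical host", wu_real)):
    print(f'{name}: {wu["credit"] / wu["decoys"]:.2f} credit per decoy')

# With similar credit-per-decoy values, a 2x gap in total credit per WU is explained
# by one host simply completing more decoys in each work unit.
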
Rosetta Moderator: Mod.Sense
ID: 95272
Profile Rafael C Valente

Joined: 14 Apr 20
Posts: 2
Credit: 18,196,210
RAC: 0
Message 95273 - Posted: 24 Apr 2020, 5:08:49 UTC - in response to Message 95272.  
Last modified: 24 Apr 2020, 5:09:28 UTC

Interestingly, your host with twice as much memory is NOT the one scoring the higher points.

However, looking at the benchmarks between the two machines:
the 16GB VM machine, which got twice the credits
     Measured floating point speed   1000 million ops/sec
     Measured integer speed          1000 million ops/sec
the 32GB machine, which got half as many credits
     Measured floating point speed   3333.31 million ops/sec
     Measured integer speed          9428.7 million ops/sec


So, that's not making a lot of sense just on the chalkboard. The machine with triple the benchmark got half the credit per unit time.

I'm just showing these facts so others can more easily offer analysis and comment as well (your profile is set to hide your computers, so I had to fill in the URLs with the task numbers to see the details). I should point out that it looks like the relative disparity of credit spans about a dozen WUs on each machine.

I can just offer that credit is based on how hard the specific tasks are to compute (as based on the experience of the machines reporting in before you for that same specific WU batch). And that credit is granted per model completed. Your high credit WU completed 169 models (decoys), vs. 90 models on the other machine. So, on a per model completed basis, the credits between the two are very close.


Thanks for your initial analysis on this... could this be related to HT (hyper-threading) on the processors? Could HT be splitting the crunched decoys per WU? But even with HT disabled, the final credit should come out to the same result: say 8 processors each doing 800, while with HT enabled the processor count rises to 16 and each does 400 points per WU... I'm really confused...

Let's wait...

Kind Regards!

RV
ID: 95273
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 95311 - Posted: 24 Apr 2020, 15:17:15 UTC

Hyper-threading doubles the amount of work you are trying to squeeze through the CPU's L2/L3 cache, and Rosetta's memory footprint is large. Running without hyper-threading obviously means fewer active threads, but it also improves the odds of finding the next needed data in the L2/L3 cache, which, for a memory-intensive application, can basically recoup the loss of running half as many threads.
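
As a rough illustration with made-up numbers (the 30% total throughput gain from HT is just an assumption for the example, not a measurement):

# Rough arithmetic only; the 30% HT throughput gain is an assumed figure.
physical_cores = 8
ht_threads = 16                      # logical processors with hyper-threading on

total_no_ht = physical_cores * 1.0   # decoys/hour without HT (arbitrary units)
total_ht = total_no_ht * 1.30        # assume HT adds ~30% total throughput

per_task_no_ht = total_no_ht / physical_cores   # 1.00 decoys/hour per running task
per_task_ht = total_ht / ht_threads             # 0.65 decoys/hour per running task

print(f"no HT: {total_no_ht:.1f} total, {per_task_no_ht:.2f} per task")
print(f"HT:    {total_ht:.1f} total, {per_task_ht:.2f} per task")
# Each individual task completes fewer decoys (so less credit per WU), even though
# the machine as a whole produces more decoys and earns more total credit.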

How many physical cores are on the two machines?
Rosetta Moderator: Mod.Sense
ID: 95311
ProDigit

Joined: 6 Dec 18
Posts: 27
Credit: 2,718,346
RAC: 0
Message 95448 - Posted: 27 Apr 2020, 22:16:04 UTC

Same results here, and it's even worse on ARM.
PPD on a 1.9 GHz quad-core ARM is like NOTHING compared to x86 at 3.5 GHz.
I mean, it's not half, not even a quarter; PPD for ARM is something like 20x lower:

https://boinc.bakerlab.org/rosetta/results.php?userid=2031893
ID: 95448
CIA

Joined: 3 May 07
Posts: 100
Credit: 21,059,812
RAC: 0
Message 95450 - Posted: 27 Apr 2020, 22:51:46 UTC - in response to Message 95272.  

Interestingly, your host with twice as much memory is NOT the one scoring the higher points.

However, looking at the benchmarks between the two machines:
the 16GB VM machine, which got twice the credits
     Measured floating point speed   1000 million ops/sec
     Measured integer speed          1000 million ops/sec
the 32GB machine, which got half as many credits
     Measured floating point speed   3333.31 million ops/sec
     Measured integer speed          9428.7 million ops/sec


So, that's not making a lot of sense just on the chalkboard. The machine with triple the benchmark got half the credit per unit time.

I'm just showing these facts so others can more easily offer analysis and comment as well (your profile is set to hide your computers, so I had to fill in the URLs with the task numbers to see the details). I should point out that it looks like the relative disparity of credit spans about a dozen WUs on each machine.

I can just offer that credit is based on how hard the specific tasks are to compute (as based on the experience of the machines reporting in before you for that same specific WU batch). And that credit is granted per model completed. Your high credit WU completed 169 models (decoys), vs. 90 models on the other machine. So, on a per model completed basis, the credits between the two are very close.


Just a data point: the measured floating-point/integer speed showing an even 1000 million ops/sec means you need to re-run the BOINC CPU benchmarks (under the Tools pulldown). After that completes, update Rosetta under the Projects tab and your host will report accurate ops numbers. 1000 million ops/sec is the default reading.
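
For what it's worth, the same two steps can also be scripted. This is only a minimal sketch using Python to drive boinccmd; it assumes boinccmd is installed alongside the BOINC client, the client is running locally, and the project URL matches the one shown in your BOINC Manager:

import subprocess

# Rosetta@home project URL (adjust to match the URL your client is attached with)
PROJECT_URL = "https://boinc.bakerlab.org/rosetta/"

# Re-run the CPU benchmarks (same as Tools -> Run CPU benchmarks in the Manager)
subprocess.run(["boinccmd", "--run_benchmarks"], check=True)

# Then ask the client to contact the project so the new figures get reported
subprocess.run(["boinccmd", "--project", PROJECT_URL, "update"], check=True)
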
ID: 95450
