Are these benchmarks right?

Message boards : Number crunching : Are these benchmarks right?

tw56

Send message
Joined: 13 Dec 05
Posts: 7
Credit: 519,161
RAC: 0
Message 16440 - Posted: 17 May 2006, 13:32:25 UTC

I've run the benchmarks a couple of times and it's about the same. The new computer has Fedora Core 5, dual Xeon 2.80 GHz processors, and 2 GB of memory. When I run the benchmarks it says:
Processor: 4 GenuineIntel Xeon 2.8 GHz
Memory: 1.97 GB physical, 1.94 GB virtual
Benchmark results:
Number of CPUs: 4
553 double precision MIPS (Whetstone) per CPU
1114 integer MIPS (Dhrystone) per CPU
I was hoping to get better with this box!
My other computer, a regular P4 2.8 GHz running WinXP, reports:
Number of CPUs: 2
1217 Whetstone
1221 Dhrystone
Is it normal for the Xeon to have such a low Whetstone?

ID: 16440
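For readers wondering what the two scores measure: Whetstone is a floating-point benchmark and Dhrystone an integer one, and BOINC reports both in MIPS per CPU. A rough Python sketch of the idea (purely illustrative; BOINC's real benchmarks are compiled C++ loops, and the loop bodies and constants here are made up):

```python
import time

def float_loop(n):
    # Whetstone-flavoured: repeated floating-point multiply/add/divide
    x = 1.0
    for _ in range(n):
        x = (x * 1.000001 + 0.5) / 1.000001
    return x

def int_loop(n):
    # Dhrystone-flavoured: repeated integer multiply/add/modulo
    x = 1
    for _ in range(n):
        x = (x * 3 + 7) % 1000003
    return x

def mops(func, n=1_000_000):
    # Very rough "millions of operations per second" figure
    start = time.perf_counter()
    func(n)
    elapsed = time.perf_counter() - start
    return n / elapsed / 1e6

print(f"float-ish score: {mops(float_loop):.1f}")
print(f"int-ish score:   {mops(int_loop):.1f}")
```

Because scores like these depend heavily on how the loop was compiled, two builds of the same benchmark (e.g. the Windows and Linux BOINC clients) can report very different numbers on identical hardware.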
tw56

Send message
Joined: 13 Dec 05
Posts: 7
Credit: 519,161
RAC: 0
Message 16443 - Posted: 17 May 2006, 14:29:17 UTC

Well, I found this message in another thread. It doesn't seem fair. So the problem is not in my Xeon but in the Linux client.

"The "standard" (from UC Berkeley) executable of BOINC for Linux benchmarks at roughly 50% of the speed that BOINC for Windows does (i.e. both running on the exact same hardware).

So the credit claims (handled by BOINC for most projects) are much lower, although the Rosetta science application itself does the same amount of real work for Rosetta@home (probably even more under Linux).

The above facts are for BOINC v5.2.13; I don't know if the new BOINC v5.4.9 has fixed this problem" - written by Dimitris Hatzopoulos.


ID: 16443
Moderator9
Volunteer moderator

Send message
Joined: 22 Jan 06
Posts: 1014
Credit: 0
RAC: 0
Message 16444 - Posted: 17 May 2006, 14:31:36 UTC - in response to Message 16443.  
Last modified: 17 May 2006, 14:31:54 UTC

Well, I found this message in another thread. It doesn't seem fair. So the problem is not in my Xeon but in the Linux client.
...

It is not fair. You should raise the issue with the BOINC development team.
Moderator9
ROSETTA@home FAQ
Moderator Contact
ID: 16444
Profile dag
Avatar

Send message
Joined: 16 Dec 05
Posts: 106
Credit: 1,000,020
RAC: 0
Message 16459 - Posted: 17 May 2006, 18:11:16 UTC
Last modified: 17 May 2006, 18:15:12 UTC

Could I ask a related question comparing Win 2003 Server to Win XP? I have a 2.8 GHz Xeon processor (only using one thread) running Win 2003 Standard Server that benchmarks at 1.46 GFLOPS and 2.95 GIOPS.

I also have a 2.0 GHz Pentium M running Win XP Pro that benchmarks at 1.55 GFLOPS and 3.19 GIOPS.

The 2.8 GHz processor benchmarks a few percent lower than the 2.0 GHz processor. Is it the HW or the SW?
dag
--Finding aliens is cool, but understanding the structure of proteins is useful.
ID: 16459
Moderator9
Volunteer moderator

Send message
Joined: 22 Jan 06
Posts: 1014
Credit: 0
RAC: 0
Message 16464 - Posted: 17 May 2006, 18:28:23 UTC - in response to Message 16459.  

Could I ask a related question comparing Win 2003 Server to Win XP? I have a 2.8 GHz Xeon processor (only using one thread) running Win 2003 Standard Server that benchmarks at 1.46 GFLOPS and 2.95 GIOPS.

I also have a 2.0 GHz Pentium M running Win XP Pro that benchmarks at 1.55 GFLOPS and 3.19 GIOPS.

The 2.8 GHz processor benchmarks a few percent lower than the 2.0 GHz processor. Is it the HW or the SW?

It could be one or both. But it could also be bus speeds or other transport issues, or slower memory in one system. A lot goes into the final benchmark numbers, as the test exercises many elements of the system.
Moderator9
ROSETTA@home FAQ
Moderator Contact
ID: 16464
Profile Feet1st
Avatar

Send message
Joined: 30 Dec 05
Posts: 1755
Credit: 4,690,520
RAC: 0
Message 16468 - Posted: 17 May 2006, 18:55:04 UTC - in response to Message 16464.  

Could I ask a related question comparing Win 2003 Server to Win XP? I have a 2.8 GHz Xeon processor (only using one thread) running Win 2003 Standard Server that benchmarks at 1.46 GFLOPS and 2.95 GIOPS.

I also have a 2.0 GHz Pentium M running Win XP Pro that benchmarks at 1.55 GFLOPS and 3.19 GIOPS.

The 2.8 GHz processor benchmarks a few percent lower than the 2.0 GHz processor. Is it the HW or the SW?

It could be one or both. But it could also be bus speeds or other transport issues, or slower memory in one system. A lot goes into the final benchmark numbers, as the test exercises many elements of the system.

This is why some CPU makers no longer stress the GHz of their processors. It's not about GHz, it's about the ability to do work. The chips have multiple instructions in progress simultaneously, even without a dual core, and the ability to overlap instructions in the pipe varies between different CPUs.
Add this signature to your EMail:
Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might!
https://boinc.bakerlab.org/rosetta/
ID: 16468
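The point above reduces to simple arithmetic: useful throughput is roughly clock frequency times instructions completed per cycle (IPC). The IPC values below are illustrative guesses, not measured figures for any real chip:

```python
def effective_mips(clock_ghz, ipc):
    # Throughput in MIPS: (cycles/second) x (instructions/cycle) / 1e6
    return clock_ghz * 1e9 * ipc / 1e6

# Hypothetical IPC values, chosen only to illustrate the argument:
# a long-pipeline 2.8 GHz part can trail a 2.0 GHz part with higher IPC.
long_pipeline = effective_mips(2.8, 0.5)    # 1400.0 MIPS
short_pipeline = effective_mips(2.0, 0.9)   # 1800.0 MIPS
print(long_pipeline < short_pipeline)       # True
```

This is why a higher clock alone says little about how much work a CPU actually gets done per second.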
BennyRop

Send message
Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 16470 - Posted: 17 May 2006, 19:15:08 UTC

The Pentium III and the Pentium M have a reasonably sized pipeline and do a reasonable amount of work per clock cycle, so the processor's frequency doesn't have to be that high.
The Pentium 4 was designed with a really long pipeline that does much less per clock cycle and requires a high frequency to perform the same work as a shorter-pipelined CPU.

I imagine you've got a Xeon based on a Pentium 4 core, and you're probably taking a small performance hit on the memory as well (if it requires ECC memory, like most dual-processor motherboards).

Although the Athlon XPs and newer have odd model ratings that are nominally based on the old Athlon CPU, the new speed ratings came out in answer to the Pentium 4 line, which is why my single-core 754-pin 2 GHz Athlon 64 is rated as a 3000+ (i.e. roughly equivalent to a 3 GHz P4).

ID: 16470
Aglarond

Send message
Joined: 29 Jan 06
Posts: 26
Credit: 446,212
RAC: 0
Message 16498 - Posted: 18 May 2006, 0:26:23 UTC
Last modified: 18 May 2006, 0:27:47 UTC

I've run the benchmarks a couple of times and it's about the same. The new computer has Fedora Core 5, dual Xeon 2.80 GHz processors, and 2 GB of memory.

I had a similar problem with a Pentium 4 3.2 GHz HT processor running Linux. The combination of HT and Linux gives extra-low benchmarks. I found several people recommending an optimized BOINC core client to compensate for this. I tried several of them, and Crunch3r's was the most stable and reliable for me. If you want, you may give it a try:
http://calbe.dw70.de/boinc.html
The best version for you will probably be:
BOINC 5.2.14 P4/Athlon64 SSE2
After some experimenting you should be able to find an optimized BOINC client that claims approximately the same credit the Windows client gets.
ID: 16498
Profile Dimitris Hatzopoulos

Send message
Joined: 5 Jan 06
Posts: 336
Credit: 80,939
RAC: 0
Message 16539 - Posted: 18 May 2006, 14:34:48 UTC - in response to Message 16498.  
Last modified: 18 May 2006, 14:36:03 UTC

I had a similar problem with a Pentium 4 3.2 GHz HT processor running Linux. The combination of HT and Linux gives extra-low benchmarks.


A minor correction: it's not "the combination of HT and Linux" that gives low benchmarks.

It's just that the "standard" BOINC v5.2.13 binary for Linux from UC Berkeley is compiled in a way that makes its benchmark report roughly HALF the speed (in FLOPS) of the BOINC v5.2.13 binary for Windows running on the very same hardware.

The science apps (Rosetta, SETI, SIMAP etc.) doing the "real" work might actually be doing MORE work per hour on the same PC under Linux than under Windows, since the OS itself is more efficient.

So, as you correctly point out, the solution to getting "fair" credit under Linux is to use an alternate BOINC client, like Crunch3r's.

Recent progress in the BOINC world is towards awarding credit based on real work done (which I expect to upset some people accustomed to ultra-high benchmarks that fit in L2 cache; with real science apps accessing memory, 3 GHz and 2 GHz PCs often do roughly the same amount of real work per unit of time).
Best UFO Resources
Wikipedia R@h
How-To: Join Distributed Computing projects that benefit humanity
ID: 16539
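The scale of the unfairness follows directly from how benchmark-based credit was claimed. A sketch using the classic cobblestone formula (100 credits per day of CPU time on a reference host scoring 1000 MIPS on both benchmarks); the six-hour work unit is hypothetical, and the benchmark scores are the ones reported earlier in this thread:

```python
def claimed_credit(cpu_seconds, whetstone_mips, dhrystone_mips):
    """Classic BOINC benchmark-based credit claim (cobblestone scale):
    100 credits per day of CPU time on a reference host scoring
    1000 MIPS on both Whetstone and Dhrystone."""
    avg_mips = (whetstone_mips + dhrystone_mips) / 2
    return cpu_seconds / 86400 * (avg_mips / 1000) * 100

# Same hypothetical 6-hour work unit; benchmark scores from this thread
# (the WinXP P4 vs the Linux Xeon reporting a low Whetstone):
win = claimed_credit(6 * 3600, 1217, 1221)    # ~30.5 credits claimed
linux = claimed_credit(6 * 3600, 553, 1114)   # ~20.8 credits claimed
print(round(win, 1), round(linux, 1))
```

Since the claim scales linearly with the benchmark average, a client whose benchmark reports half the FLOPS claims far less credit for identical work, which is exactly the problem described above.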
MikeMarsUK

Send message
Joined: 15 Jan 06
Posts: 121
Credit: 2,637,872
RAC: 0
Message 16781 - Posted: 21 May 2006, 18:33:11 UTC - in response to Message 16539.  
Last modified: 21 May 2006, 18:43:07 UTC

...

Recent progress in the BOINC world is towards awarding credit based on real work done (which I expect to upset some people accustomed to ultra-high benchmarks that fit in L2 cache; with real science apps accessing memory, 3 GHz and 2 GHz PCs often do roughly the same amount of real work per unit of time).


I do hope that Rosetta moves towards this model soon :-) I have two PCs, one AMD and one Intel, both roughly similar in speed, but the AMD appears to get twice the credit (on Rosetta) that the Intel does. The other projects I crunch for (CPDN, Seasonal, BBC/CCE) all award points per work unit rather than from benchmarks, which I think is a 'good thing'.

Incidentally, I noticed that the credit per work unit has gone up by 50% or so on the Intel box since 5.16, which I feel makes things fairer. I haven't run Rosetta on the higher-scoring AMD recently so I can't compare, since it's currently dedicated exclusively to a 3-month work unit...

ID: 16781
Moderator9
Volunteer moderator

Send message
Joined: 22 Jan 06
Posts: 1014
Credit: 0
RAC: 0
Message 16784 - Posted: 21 May 2006, 20:40:15 UTC - in response to Message 16781.  

...

Recent progress in the BOINC world is towards awarding credit based on real work done (which I expect to upset some people accustomed to ultra-high benchmarks that fit in L2 cache; with real science apps accessing memory, 3 GHz and 2 GHz PCs often do roughly the same amount of real work per unit of time).


I do hope that Rosetta moves towards this model soon :-) I have two PCs, one AMD and one Intel, both roughly similar in speed, but the AMD appears to get twice the credit (on Rosetta) that the Intel does. The other projects I crunch for (CPDN, Seasonal, BBC/CCE) all award points per work unit rather than from benchmarks, which I think is a 'good thing'.

Incidentally, I noticed that the credit per work unit has gone up by 50% or so on the Intel box since 5.16, which I feel makes things fairer. I haven't run Rosetta on the higher-scoring AMD recently so I can't compare, since it's currently dedicated exclusively to a 3-month work unit...


The code in 5.16 is slightly more efficient.
Moderator9
ROSETTA@home FAQ
Moderator Contact
ID: 16784
Profile dcdc

Send message
Joined: 3 Nov 05
Posts: 1831
Credit: 119,627,225
RAC: 10,243
Message 16792 - Posted: 21 May 2006, 22:04:09 UTC - in response to Message 16784.  

The code in 5.16 is slightly more efficient.

I might be showing my naivety here, but how does Rosetta being more efficient affect the credits?
ID: 16792
Moderator9
Volunteer moderator

Send message
Joined: 22 Jan 06
Posts: 1014
Credit: 0
RAC: 0
Message 16793 - Posted: 21 May 2006, 22:28:35 UTC - in response to Message 16792.  
Last modified: 21 May 2006, 22:31:05 UTC

The code in 5.16 is slightly more efficient.


I might be showing my naivety here, but how does Rosetta being more efficient affect the credits?

For any given set of benchmark values, if the application runs faster the claimed credits will be higher as well. The reverse is also true: if the application runs a little better, it will claim higher credit for any given benchmark. This may change under the FLOPS-count credit system.

I suspect you are also seeing the work units producing more models per unit time than earlier versions as well.

Moderator9
ROSETTA@home FAQ
Moderator Contact
ID: 16793
MikeMarsUK

Send message
Joined: 15 Jan 06
Posts: 121
Credit: 2,637,872
RAC: 0
Message 16799 - Posted: 21 May 2006, 23:10:44 UTC - in response to Message 16784.  

...The code in 5.16 is slightly more efficient.


Excellent! Increases in efficiency are very good news for the project science :-) so congratulations to the team.


ID: 16799
Moderator9
Volunteer moderator

Send message
Joined: 22 Jan 06
Posts: 1014
Credit: 0
RAC: 0
Message 16800 - Posted: 21 May 2006, 23:19:07 UTC - in response to Message 16799.  
Last modified: 21 May 2006, 23:21:25 UTC

...The code in 5.16 is slightly more efficient.


Excellent! Increases in efficiency are very good news for the project science :-) so congratulations to the team.

Well, just to be clear, I did not produce ANY of the improvements.

Bin, Rhiju, and David K are behind the improvements, and there are more to come. You should see significant reductions in memory footprint and download file sizes in the next version. This is all in response to comments from all of you. They are listening and incorporating your ideas, so all of you are also responsible for the improvements.

And you are right, these guys are doing very good work under very difficult circumstances. Many issues do not arise, even after testing on RALPH, until the full spectrum of systems in the production environment is applied to the work.

Moderator9
ROSETTA@home FAQ
Moderator Contact
ID: 16800
Profile Team_Elteor_Borislavj~Intelligence

Send message
Joined: 7 Dec 05
Posts: 14
Credit: 56,027
RAC: 0
Message 16822 - Posted: 22 May 2006, 10:19:33 UTC

If you want to compare, with another Dual Xeon:
Starting BOINC client version 5.5.0 for windows_intelx86
libcurl/7.14.0 OpenSSL/0.9.8 zlib/1.2.3
Data directory: C:\Program Files\BOINC
Processor: 2 GenuineIntel Intel(R) XEON(TM) CPU 2.20GHz
Memory: 1023.42 MB physical, 2.40 GB virtual
Disk: 76.32 GB total, 63.71 GB free
Running CPU benchmarks
Benchmark results:
Number of CPUs: 2
2509 floating point MIPS (Whetstone) per CPU
4609 integer MIPS (Dhrystone) per CPU
Finished CPU benchmarks

2 physical CPUs; I disabled HT.

ID: 16822
Profile Carlos_Pfitzner
Avatar

Send message
Joined: 22 Dec 05
Posts: 71
Credit: 138,867
RAC: 0
Message 17294 - Posted: 29 May 2006, 7:19:15 UTC - in response to Message 16781.  

...

Recent progress in the BOINC world is towards awarding credit based on real work done (which I expect to upset some people accustomed to ultra-high benchmarks that fit in L2 cache; with real science apps accessing memory, 3 GHz and 2 GHz PCs often do roughly the same amount of real work per unit of time).


I do hope that Rosetta moves towards this model soon :-) I have two PCs, one AMD and one Intel, both roughly similar in speed, but the AMD appears to get twice the credit (on Rosetta) that the Intel does. The other projects I crunch for (CPDN, Seasonal, BBC/CCE) all award points per work unit rather than from benchmarks, which I think is a 'good thing'.

Incidentally, I noticed that the credit per work unit has gone up by 50% or so on the Intel box since 5.16, which I feel makes things fairer. I haven't run Rosetta on the higher-scoring AMD recently so I can't compare, since it's currently dedicated exclusively to a 3-month work unit...


-----


Indeed, AMD produces about two times the work of a P4, clock for clock.

On a P4 1800 MHz I crunch one Einstein WU in 6 hours,
while on an AMD 1600 MHz I crunch one Einstein WU in *one* (1) hour.

So, indeed, AMD must get more credit than the P4!
*More than 6 times the work done of a similar CPU at the same clock speed!


BTW: This fact is *not* a Rosetta problem,
and this discussion should be moved to an AMD or Intel thread.

However,
wanting Rosetta to grant the *same* credit to Intel
as is granted to AMD just because of *equal* clock speed is
overkill.

*Even* with the *same* FLOPS, AMD does twice the work Intel does.

*These CPUs have a different overall design.

So, 1 FLOP AMD == 2 FLOPs Intel.

Thus, real work done is *not* the same thing as counting "FLOPS".

SIMAP, for example, uses *only* integer arithmetic.

So, FLOPS has no value at all! (for SIMAP)

*In 1 CPU cycle, an AMD 3DNow! instruction does more than 200 FLOPs do :-!

PS: How much has Intel paid you for this proposal?
ID: 17294 · Rating: -1
BennyRop

Send message
Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 17328 - Posted: 30 May 2006, 1:44:25 UTC

*Even* with the *same* FLOPS, AMD does twice the work Intel does.

When referring to the long pipeline of the Intel P4 and its need for high clock speed to get the same amount of work done as a lower-clocked AMD, please refer specifically to the P4. There are other Intel parts, both before the P4 and current, that aren't set up the same way as the P4. The Pentium III core was roughly equal to the AMD core in performance, and the Pentium M and Core Solo/Duo parts have now given up on the Pentium 4 architecture and are more like a Pentium III core. I.e. a 2 GHz Core Duo will score a bit better in BOINC benchmarks than the 2 GHz Athlon 64 parts, while it takes a 3 GHz P4 to match a 2 GHz Athlon 64 part.

ID: 17328
[AF>Linux]Arnaud
Avatar

Send message
Joined: 17 Sep 05
Posts: 38
Credit: 10,490
RAC: 0
Message 17912 - Posted: 7 Jun 2006, 10:00:29 UTC
Last modified: 7 Jun 2006, 10:48:11 UTC

Hello,
I'm using Linux.
My resource shares on CPDN, SAP and Rosetta are the same, but the RAC is different:
CPDN: 116
SAP: 119
Rosetta: 56

Is it considered cheating to use an optimised BOINC client to correct the credit and get the same RAC on the 3 projects?

I've just run the benchmarks with the trux client, and I get the same results as with the official 5.4.9 client.
Does anyone know if the calibrate_credit option in trux helps to correct the claimed credit on Rosetta, or will it change nothing since the benchmarks are the same on the 2 BOINC clients?

On my machine with the 5.4.9 CC:
Benchmarks with Windows: 970/2037
Benchmarks with Linux: 533/1345
Thanks,
Arnaud
ID: 17912
MikeMarsUK

Send message
Joined: 15 Jan 06
Posts: 121
Credit: 2,637,872
RAC: 0
Message 17919 - Posted: 7 Jun 2006, 11:57:02 UTC
Last modified: 7 Jun 2006, 11:59:07 UTC

Credit isn't calibrated between different projects, so you'd expect the RACs to be different. I get around half the credit from Rosetta that I do from other projects, but that's just 'how it is'.

Of course, sites and teams which add together credit from different projects are 'broken' in this respect; there should be some sort of inter-project correction factor so that credit can be compared.

So my signature below is a little misleading.

ID: 17919




©2024 University of Washington
https://www.bakerlab.org