Message boards : Number crunching : Are INTEL systems not getting enough credit or are AMD systems getting too much?
TeAm Enterprise | Joined: 28 Sep 05 | Posts: 18 | Credit: 27,911,183 | RAC: 203
I just ran across some stats that led me to believe Intel systems are not getting enough credit, or AMD systems are getting too much. Rather than write the whole story up here, I would like to refer you to the TeAm Enterprise forums and a specific thread: http://www.team-enterprise.org/smf/index.php?topic=102.msg688#msg688

I would like some of you to do your own digging and see if you find similar results. ;)

Smoke
BennyRop | Joined: 17 Dec 05 | Posts: 555 | Credit: 140,800 | RAC: 0
You've mentioned the time each machine spent on the WUs, but not the benchmarks. If the Intel system (a Dell) is configured like any other Dell, it's running NAV/McAfee and 20 needless apps, and will thus get lower benchmarks.

You also didn't mention the amount of work done by each machine. There's a listing for each result that gives the number of decoys each made. When comparing an identical WU run on the two systems (not different ones like you've shown), the Athlon will have run through roughly two-thirds more decoys/models (75 vs. 45) than the Dell 2.8 GHz P4 system. You're comparing apples to oranges.
Moderator9 (Volunteer moderator) | Joined: 22 Jan 06 | Posts: 1014 | Credit: 0 | RAC: 0
"I just ran across some stats that led me to believe Intel systems are not getting enough credit, or AMD systems are getting too much."

While the names of the work units used in your comparison are similar, that is where the similarity ends: they are not the same. Moreover, the credit claim is based on the benchmarks run by the BOINC package and has nothing to do with Rosetta, and the benchmarks for these two systems are very likely quite different.

I have run similar tests on my systems with work units that I know to be the same, and have found that there is variation even for two identical systems running the same work. That variation is caused by two factors. One, the benchmarks are not identical even on two identical systems. That is the nature of modern computers: processes turn on and off automatically in the background, so the result depends largely on what is running when the benchmark is taken. Two, the nature of the work being performed varies even on identical Rosetta work units. The random number provided for each work unit can place the processing either closer to or farther from the low-energy level that is the target of the search, which affects the final results and the computing required to reach them.

Moderator9
ROSETTA@home FAQ
Moderator Contact
TeAm Enterprise | Joined: 28 Sep 05 | Posts: 18 | Credit: 27,911,183 | RAC: 203
I understand that finding two identical WUs would be next to impossible; I only cited two very similar WUs to make the point. I guess the only way to confirm whether credit is being fairly awarded would be if Rosetta had a test WU like we used to have in SETI. Is there such a thing? If there isn't, can one be created? You guys probably know a lot more about this kind of stuff than I do, but it still looks fishy to me. ;)

Crunch with friends - TeAm Anandtech
BennyRop | Joined: 17 Dec 05 | Posts: 555 | Credit: 140,800 | RAC: 0
Smoke1:
CPU type: GenuineIntel Intel(R) Pentium(R) D CPU 2.80GHz
Number of CPUs: 2
Measured floating point speed: 2059.64 million ops/sec
Measured integer speed: 4932.71 million ops/sec

GreenHornet:
CPU type: AuthenticAMD AMD Athlon(tm) 64 X2 Dual Core Processor 3800+
Number of CPUs: 2
Measured floating point speed: 3070.45 million ops/sec
Measured integer speed: 9285.48 million ops/sec

Mine (2 GHz, 754-pin CPU):
CPU type: AuthenticAMD AMD Athlon(tm) 64 Processor 3000+
Number of CPUs: 1
Measured floating point speed: 1877.87 million ops/sec
Measured integer speed: 3485.89 million ops/sec

From what I'm seeing, I imagine that Smoke1 is probably running the default client, and GreenHornet is running an optimized BOINC client in addition to being overclocked.

From https://boinc.bakerlab.org/rosetta/forum_thread.php?id=1501#15351 I get the following formula for claimed credit:

claimed credit = ([whetstone] + [dhrystone]) * wu_cpu_time_in_sec / 1728000

So we need to know whetstones, dhrystones, and time spent to determine the credit.
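To make that formula concrete, here is a minimal sketch in Python. The benchmark figures are the per-core numbers quoted in the post above; the three-hour CPU time is just an assumed example, not taken from any actual result.

```python
def claimed_credit(whetstone_mips, dhrystone_mips, cpu_seconds):
    # claimed credit = ([whetstone] + [dhrystone]) * wu_cpu_time_in_sec / 1728000
    return (whetstone_mips + dhrystone_mips) * cpu_seconds / 1_728_000

# Per-core benchmark figures quoted in the post above.
hosts = {
    "Smoke1 (Pentium D 2.80GHz)":       (2059.64, 4932.71),
    "GreenHornet (Athlon 64 X2 3800+)": (3070.45, 9285.48),
}

cpu_seconds = 3 * 3600  # assumed three-hour work unit, not an actual result

for name, (whet, dhry) in hosts.items():
    print(f"{name}: {claimed_credit(whet, dhry, cpu_seconds):.2f} credits claimed")
```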
Travis DJ | Joined: 2 May 06 | Posts: 10 | Credit: 537,572 | RAC: 0
"From what I'm seeing, I imagine that Smoke1 is probably running the default client, and GreenHornet is running an optimized BOINC client in addition to being overclocked."

GreenHornet has a dual core CPU, so divide the fpops/iops by two and that's roughly how much work per CPU core is possible. It's probably not overclocked; the numbers make sense in relation to the Athlon 64 3000+ you have. If you take into consideration the 'high' numbers of the X2, but remember that it only has one memory interface and factor in the wu_time/xxx term, then the claimed credit would be accurate.

In any case, I thought the benchmark code in BOINC clients was identical across the board, no matter whether the client was 'optimized' or not. There is bound to be some variation, as said above, even on the same system; for that reason LHC@home implemented a system to average the claimed credit into the final granted credit of a given work unit.
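As a rough illustration of that cross-checking idea, here is a sketch that averages a quorum's claims into a single granted value. The exact rule LHC@home uses may differ, and the claim values below are made up for illustration.

```python
def granted_credit(claims):
    # Grant every host in the quorum the same credit, derived from the claims.
    claims = sorted(claims)
    if len(claims) >= 3:            # enough results to discard the extremes
        claims = claims[1:-1]
    return sum(claims) / len(claims)

# Hypothetical claims from four hosts that crunched the same work unit.
claims = [22.4, 24.8, 25.1, 61.0]   # the extremes (22.4 and 61.0) get dropped
print(f"granted to every host: {granted_credit(claims):.2f} credits")
```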
BennyRop | Joined: 17 Dec 05 | Posts: 555 | Credit: 140,800 | RAC: 0
https://boinc.bakerlab.org/rosetta/results.php?hostid=208361&offset=40

GreenHornet is producing roughly 160,000 seconds of work on the 8th. If, as you claim, the dual core score is doubled by the fact that it is a dual core, the amount of time the BOINC client would report would be 86,400 seconds. If they get double the benchmark for having dual cores and double the time of a single core, that's 4x the score for having around twice the performance.

Here's a dual core running an optimized BOINC client, and you'll notice that its score is not at least 1.5 times GreenHornet's, so I surmise that GreenHornet is also running an optimized BOINC client:

CPU type: AuthenticAMD AMD Athlon(tm) 64 X2 Dual Core Processor 3800+
Number of CPUs: 2
Measured floating point speed: 3679.26 million ops/sec
Measured integer speed: 11248.15 million ops/sec
http://qah.uni-muenster.de/show_host_detail.php?hostid=4899

The Intel part is also a dual core, and it's running roughly the same as mine (a benchmark total of about 6k versus my 5.5k), which would embarrass Intel to no end: a stock single-core, two-year-old AMD part roughly matching the performance of a dual core Intel part from this year. The new Intel dual core parts are supposed to have better FP than our Athlons, so the values look right.

Unless TeAm Enterprise returns and informs us that both GreenHornet and Smoke1 are running the same default client, I'm assuming that one is running the default and the other an optimized client, or that they're running wildly different optimized clients.
Travis DJ | Joined: 2 May 06 | Posts: 10 | Credit: 537,572 | RAC: 0
"GreenHornet is producing roughly 160,000 seconds of work on the 8th. If, as you claim, the dual core score is doubled by the fact that it is a dual core, the amount of time the BOINC client would report would be 86,400 seconds."

No, no. Where you're wrong is that each core processes one work unit at a time, so that means two work units, not double the time. (What you described is as if two guys mowing my two yards would do double the work in double the time; in fact, if they each mow one lawn simultaneously at the same speed, they get double the work done in the same amount of time as one man mowing one lawn at a time.) The whet/dhry score is a combined number for both cores running for a fixed amount of time. Basically, what you get is two work units completing (given similar work units) at close to the same time as one work unit completing on a single core of the same speed.

So, if you find a way to make time go faster than real time, let me know so I can get some of that double-time performance. :)
BennyRop | Joined: 17 Dec 05 | Posts: 555 | Credit: 140,800 | RAC: 0
"GreenHornet is producing roughly 160,000 seconds of work on the 8th. If, as you claim, the dual core score is doubled by the fact that it is a dual core, the amount of time the BOINC client would report would be 86,400 seconds."

Okay, you've conned me into loading BOINC on my work Athlon X2 3800+ just to see (as you claim) BOINC report double the benchmarks of my home system's single core 2 GHz part, and I get:

5/12/2006 4:51:09 PM||Starting BOINC client version 5.4.9 for windows_intelx86
5/12/2006 4:51:09 PM||Processor: 2 AuthenticAMD AMD Athlon(tm) 64 X2 Processor 3800+
5/12/2006 4:51:09 PM||Memory: 511.48 MB physical, 1.22 GB virtual
5/12/2006 4:51:14 PM||Running CPU benchmarks
5/12/2006 4:52:13 PM||Benchmark results:
5/12/2006 4:52:13 PM|| Number of CPUs: 2
5/12/2006 4:52:13 PM|| 1920 floating point MIPS (Whetstone) per CPU
5/12/2006 4:52:13 PM|| 2703 integer MIPS (Dhrystone) per CPU

(NAV is still running, so my benchmarks may be a little lower than they could have been.)

This sure doesn't seem double my home system's benchmarks:

Measured floating point speed: 1877.87 million ops/sec
Measured integer speed: 3485.89 million ops/sec

Granted, GreenHornet has been overclocked to 4200+ or 4400+ timings (as reported by the owner in the linked thread), but its benchmark values are triple the Dhrystone rating of my X2 3800+ running at stock speed with the default client, and 1.5 times its Whetstone rating (i.e., it's using an optimized client). And you'll note that the 2 GHz dual core 3800+ gets roughly identical benchmarks to my single core 2 GHz Athlon 64 754-pin part at home.

Therefore, we learn that 2 GHz dual core Athlon 64 CPUs get BOINC benchmark scores equal to a single core 2 GHz part, not double. If you run both cores on the project, you get 48 hours credited to your machine for every 24-hour period.

Work X2 system (loaded just to run benchmarks, not actually contributing):
https://boinc.bakerlab.org/rosetta/show_host_detail.php?hostid=219572

Home single core system:
https://boinc.bakerlab.org/rosetta/show_host_detail.php?hostid=121218

And getting back to the pointless argument: if a dual core system were getting twice the benchmarks, it shouldn't also be getting twice the time of a single core system. Dual cores get up to 48 hours in a 24-hour period, but they get single core benchmarks.
Moderator9 (Volunteer moderator) | Joined: 22 Jan 06 | Posts: 1014 | Credit: 0 | RAC: 0
"... Okay, you've conned me into loading BOINC on my work Athlon X2 3800+ just to see (as you claim) BOINC report double the benchmarks of my home system's single core 2 GHz part..."

Gents,

A dual core system will usually get the same benchmark as a single core, all else being equal. The advantage is that by using both cores you can process twice the work per unit time, so in effect this doubles the CPU seconds you report. Under those conditions a dual core system will claim twice as much credit as an otherwise identical single core system would. Basically, it gets twice the throughput. So you can look at this as twice the work in the same amount of time, or twice the credits in the same amount of time, when compared to a single core system. This also scales up to quad core systems.

In any case, the benchmarks will be the same for two systems clocked at the same speed no matter the number of cores, because the benchmark is a function of clocking. This of course ignores any effects of overclocking; that is a different subject.

Moderator9
ROSETTA@home FAQ
Moderator Contact
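A quick back-of-the-envelope check of that point, using the claimed-credit formula quoted earlier in the thread and roughly the stock X2 3800+ per-core benchmarks posted above: the dual core's larger claim comes entirely from reporting twice the CPU seconds, not from a doubled benchmark.

```python
def claimed_credit(whetstone_mips, dhrystone_mips, cpu_seconds):
    # claimed credit = ([whetstone] + [dhrystone]) * wu_cpu_time_in_sec / 1728000
    return (whetstone_mips + dhrystone_mips) * cpu_seconds / 1_728_000

whet, dhry = 1920.0, 2703.0   # roughly the stock X2 3800+ per-core benchmarks
day = 86_400                  # wall-clock seconds in one day

single_core = claimed_credit(whet, dhry, 1 * day)  # one core: 24 CPU-hours/day
dual_core   = claimed_credit(whet, dhry, 2 * day)  # two cores: 48 CPU-hours/day

print(f"single core: {single_core:.1f} credits claimed per day")
print(f"dual core:   {dual_core:.1f} credits claimed per day (twice the throughput)")
```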
James | Joined: 27 Mar 06 | Posts: 4 | Credit: 23,809 | RAC: 0
Overclocking is not entirely off-topic, in that it does account for some of the 'discrepancies' in claimed credit. Specifically, if you take two X2 4800+ systems and one is overclocked to 2.7 GHz while the other is at 2.4 GHz, you will be comparing apples to oranges. This assumes the overclocking is done correctly, i.e., the system is producing stable benchmarks (if you don't get stable benchmarks, decrease your frequency and/or multiplier). I happen to have my 4800+ overclocked to ~2.67 GHz, along with major tweaks to the memory clock, voltages, etc. I replaced the stock fan almost immediately with one that is large and hardly noticeable (the stock fan runs at 3.5k rpm at full load), and it decreases load temps by 5 C.

As for 'optimized' clients, I believe the actual point is to receive credit that reflects the system's actual performance rather than a BOINC benchmark bias; the benchmark is all the client really does anyway, i.e., the Rosetta application does the crunching. Another interesting point is that the benchmarks actually favor Intel machines because of the default compilation: AMD flags are not part of the default, one-size-fits-all benchmark program, which is not the case with some 'optimized' clients. The point for those who use optimized clients is to get a benchmark that treats their machine fairly, rather than treating it as an Intel (which is basically what the benchmark is compiled for). As more and more people migrate to 'optimized' clients, the incentive increases for others who are in a 'competitive' frame of mind. This *is* happening; it's becoming much more widespread. It also shouldn't be an issue, as the benchmarks attempt to accurately represent the system's true performance.

That aside, the best 'client' out there is Trux's calibrated client (non-optimized), which lets you significantly 'play around' with BOINC, i.e., set CPU affinity, block the annoying popups, set process priority, set project priority, force results to be reported immediately, etc. It would be nice if the BOINC developers would do the same rather than let this proliferation occur, which sort of decreases their motivation to innovate. The BOINC client/GUI developers are also biased toward the SETI project, which has an optimized *application*. Einstein@home also has an optimized application that is going to be integrated as the default in the future; it vastly speeds up the crunching of WUs.

Again, claiming that credit is 'unfair' suggests that the sole purpose of the project is credit, when it is actually the science that is important. I'd also like to note that 'optimized' clients do *not* report inflated scores by default; they are based on your specific processor's capabilities (the two legit ones are Trux's and Crunch's). The 'suspect' ones are systems that are obviously running manipulated clients compiled to report completely false benchmarks, as some individuals have done.