Message boards : Number crunching : 32-bit Windows XP vs 64-bit Linux
Warped | Joined: 15 Jan 06 | Posts: 48 | Credit: 1,788,185 | RAC: 0
I know this subject has been discussed before, but I cannot find that any conclusions were reached. I run a machine that dual-boots 32-bit Windows XP and 64-bit Ubuntu. I am finding that the Linux setup is giving about one-third more credit for the same run time. I have also noticed that the Linux system claims about double the credit claimed under Windows yet is consistently awarded less than it claims, whereas the Windows setup claims less than it is awarded. What I am interested in is whether it's the 64-bit or the Linux (or maybe both?) that is giving the performance boost. Perhaps there's no difference in the actual benefit to the project and the anomaly lies in the credit claiming and granting system. Any comments? Warped
dcdc | Joined: 3 Nov 05 | Posts: 1831 | Credit: 119,452,852 | RAC: 11,025
Credit granted is fairly well aligned with work done, so the setup with the highest average granted credit per CPU-second or CPU-hour is the more efficient one. I have no idea whether that's down to a 32-bit vs 64-bit OS, but the Rosetta application itself is always 32-bit. The easiest way to check is to drop a page of results into a spreadsheet and work out the average of granted credit divided by CPU time for each OS. HTH Danny
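As a concrete illustration of the spreadsheet approach Danny describes, here is a minimal Python sketch. It assumes you have copied a page of results into a CSV with hypothetical columns `os`, `granted_credit`, and `cpu_time_seconds`; the file name and column names are assumptions for illustration, not something BOINC exports for you.

```python
# Minimal sketch: average granted credit per CPU-hour, grouped by OS.
# Assumes a hand-made CSV named "results.csv" with hypothetical columns:
# os, granted_credit, cpu_time_seconds (not a real BOINC export format).
import csv
from collections import defaultdict

totals = defaultdict(lambda: [0.0, 0.0])  # os -> [total credit, total CPU-hours]

with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        credit = float(row["granted_credit"])
        cpu_hours = float(row["cpu_time_seconds"]) / 3600.0
        totals[row["os"]][0] += credit
        totals[row["os"]][1] += cpu_hours

for os_name, (credit, hours) in totals.items():
    print(f"{os_name}: {credit / hours:.2f} granted credit per CPU-hour")
```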
Warped | Joined: 15 Jan 06 | Posts: 48 | Credit: 1,788,185 | RAC: 0
Hi Danny, that's exactly what I did. Both use a 4-hour runtime preference and both have been averaging within a few percent of 14,400 seconds per work unit. I am aware that the application is only 32-bit, but was thinking that perhaps a 64-bit system's ability to address more memory was part of the issue. Ideally, someone who also dual-boots, but with both systems 64-bit or both 32-bit, could comment. Regarding the credit-claimed issue, you're saying that the credit granted is an accurate reflection of the relative work done. This means that the Ubuntu OS is "doing more work", which gets back to my curiosity about how it achieves this. Warped
Mod.Sense (Volunteer moderator) | Joined: 22 Aug 06 | Posts: 4018 | Credit: 0 | RAC: 0
Credit comparisons are always tricky because credit can depend heavily on the exact protocols used in a given work unit. For example, one work unit might use significantly more memory than another. A machine a bit short on memory, whether because other active applications are consuming it or because of a physical lack of resources, may find a task tougher to crunch than a machine with more memory available to BOINC. By definition, the two machines will never get exactly the same mix of tasks. The best frame of reference is a very long period of time, subgrouping by task names, comparing averages only within the subgroups, and then seeing whether you can find any consistency across subgroups. Before attempting any conclusions, blind yourself as to which machine each task came from and see if you reach a similar result. What if you just randomly cut the list in half and tally the two halves? What if you change the time period studied? What if you omit a subgroup? Does it change the apparent relative performance between the two machines? What if you take only tasks from the Linux machine, randomly split each subgroup in half, and pretend the first half is a different machine from the second half? Is there a measurable disparity between the two halves? If there is, it shows that the variation between specific tasks, even within the same subgroup, can lead you to misleading conclusions. Rosetta Moderator: Mod.Sense
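To make the subgroup-and-shuffle check concrete, below is a hedged Python sketch. It assumes the same hypothetical CSV as above plus a `task_name` column, and it treats the portion of the task name before the final underscore as the subgroup key; that parsing rule is an assumption for illustration, not how Rosetta actually names tasks.

```python
# Sketch of the sanity check described above: group tasks by name, compare
# credit per CPU-hour within each subgroup, then randomly split one machine's
# tasks in half to see how much variation appears purely by chance.
# CSV layout (os, task_name, granted_credit, cpu_time_seconds) is assumed.
import csv
import random
from collections import defaultdict

def credit_rate(rows):
    credit = sum(float(r["granted_credit"]) for r in rows)
    hours = sum(float(r["cpu_time_seconds"]) for r in rows) / 3600.0
    return credit / hours if hours else 0.0

with open("results.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Subgroup key: task name with its trailing index stripped (assumed format).
groups = defaultdict(list)
for r in rows:
    groups[r["task_name"].rsplit("_", 1)[0]].append(r)

# Compare credit/CPU-hour per OS within each subgroup only.
for key, grp in sorted(groups.items()):
    by_os = defaultdict(list)
    for r in grp:
        by_os[r["os"]].append(r)
    rates = {os_name: round(credit_rate(g), 2) for os_name, g in by_os.items()}
    print(f"{key}: {rates}")

# Null check: randomly halve the Linux tasks and compare the two halves.
linux = [r for r in rows if "linux" in r["os"].lower()]
random.shuffle(linux)
half = len(linux) // 2
print("Linux random halves:",
      round(credit_rate(linux[:half]), 2),
      round(credit_rate(linux[half:]), 2))
```

If the two random halves of the same machine differ by nearly as much as the two machines do, the apparent Windows-vs-Linux gap is probably within the noise of task-to-task variation.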