Why do I receive much less credit than my client claims?

Message boards : Number crunching : Why do I receive much less credit than my client claims?



Feet1st
Joined: 30 Dec 05
Posts: 1755
Credit: 4,690,520
RAC: 0
Message 40486 - Posted: 7 May 2007, 16:45:07 UTC

Hmmm, in searching I came across this post from Dr. Baker saying that Rosetta was ported from FORTRAN to C++. I thought I remembered a post by another member of the project team saying there was still some FORTRAN, but have not found it.

It would seem likely that a C++ compiler with the desired optimizations exists. Perhaps you could create a thread to discuss that. And rather than pose it as a credit issue, just point out that you could complete more models if optimizations were made.
Add this signature to your EMail:
Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might!
https://boinc.bakerlab.org/rosetta/
ID: 40486 · Rating: 0
FluffyChicken
Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 40494 - Posted: 7 May 2007, 20:05:30 UTC - in response to Message 40486.  

Hmmm, in searching I came across this post from Dr. Baker saying that Rosetta was ported from FORTRAN to C++. I thought I remembered a post by another member of the project team saying there was still some FORTRAN, but have not found it.

It would seem likely that a C++ compiler with the desired optimizations exists. Perhaps you could create a thread to discuss that. And rather than pose it as a credit issue, just point out that you could complete more models if optimizations were made.



If I remember correctly, it was that part of the coding was still in a Fortran style rather than a C++ style, not the actual language itself.
Team mauisun.org
ID: 40494 · Rating: 0
adrianxw
Joined: 18 Sep 05
Posts: 653
Credit: 11,816,586
RAC: 1,441
Message 40517 - Posted: 8 May 2007, 7:32:25 UTC

If an application is coded in a Fortran-like procedural style, a decent compiler should still be able to produce tight code. Optimisations are typically done after the language parsing. C++ coding per se will not produce better object code. What it does is make some aspects of development in large programs easier, because the OO paradigm is regarded as superior.

I programmed with various Fortran marks for 18 years. It is quite possible to write and maintain very large software packages without OO.
Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
ID: 40517 · Rating: 0
dcdc
Joined: 3 Nov 05
Posts: 1830
Credit: 119,208,549
RAC: 2,517
Message 40521 - Posted: 8 May 2007, 10:25:42 UTC - in response to Message 40484.  

However, there must be something else going wrong here because most results I sent receive much less credit than the client claims and there is no other computer that did the same WU, so there is also no consensus like in other projects (3 results required, the middle claimed credit granted to all 3).

The consensus is from the other computers that did the same Work Units - although they start from different points, all tasks within a particular WU will take a similar amount of time and resources (there are exceptions, but it's a pretty good system). The other computers that crunched the same WUs as you must have claimed less credit than your machine - hence your granted credit being reduced.

I don't know of a way to filter the results to show all the jobs within a WU so you can compare your scores - i did it by downloading all the info into Excel and filtering that way, but that was a while ago now.

The Xenon (and Cell BE) use a form/subset of AltiVec, so maybe/hopefully the dev team will work on improving this.

HTH
Danny
ID: 40521 · Rating: -1
Martin P.
Joined: 26 May 06
Posts: 38
Credit: 168,333
RAC: 0
Message 40607 - Posted: 9 May 2007, 22:53:38 UTC - in response to Message 40521.  

However, there must be something else going wrong here because most results I sent receive much less credit than the client claims and there is no other computer that did the same WU, so there is also no consensus like in other projects (3 results required, the middle claimed credit granted to all 3).

The consensus is from the other computers that did the same Work Units - although they start from different points, all tasks within a particular WU will take a similar amount of time and resources (there are exceptions, but it's a pretty good system). The other computers that crunched the same WUs as you must have claimed less credit than your machine - hence your granted credit being reduced.

I don't know of a way to filter the results to show all the jobs within a WU so you can compare your scores - i did it by downloading all the info into Excel and filtering that way, but that was a while ago now.

The xenon (and CellBE) use a form/subset of altivec so maybe/hopefully the dev team will work on improving this.

HTH
Danny


Danny,

please click the links I provided!!! NO other computer crunched the same work-unit, that's the problem! I was THE ONLY ONE who crunched these work-units and still I receive less credit than my client claims. That's what this whole thread is about.



ID: 40607 · Rating: -1
Ethan
Volunteer moderator
Joined: 22 Aug 05
Posts: 286
Credit: 9,304,700
RAC: 0
Message 40628 - Posted: 10 May 2007, 5:19:09 UTC

Here's info I sent out answering the same question a couple weeks ago:

There are several calculations going on to determine 'credit'. Boinc itself has a benchmark that determines how many calculations a given computer can do per second. When you first install Boinc it will benchmark your system doing a bunch of integer adds (like 1 + 1) and floating point divides (1.324 / 6.22222)... very simple calculations that don't depend on the amount of memory or disk speed of your computer.

Boinc calculates the number of credits you 'claim' based on these benchmarks. If (and I'm making numbers up since I don't know the exact equation) a computer that has an integer benchmark of 1000 works for an hour, Boinc would claim 10 credits. A computer that had a benchmark of 2000 would claim 20 credits for the same hour since Boinc has determined it does twice the amount of calculations.
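The benchmark-based claim Ethan describes can be sketched in a few lines. The function name and the scale factor are invented for illustration (as Ethan says, he's making numbers up); the real BOINC formula differs:

```python
# Toy sketch of a benchmark-based "claimed credit": credit scales
# linearly with benchmark speed and time worked. The 0.01 scale
# factor is made up to reproduce Ethan's example numbers.

def claimed_credit(benchmark_ops_per_sec, hours_worked, scale=0.01):
    """A machine twice as fast claims twice the credit per hour."""
    return benchmark_ops_per_sec * hours_worked * scale

print(claimed_credit(1000, 1))  # benchmark 1000, one hour -> 10.0 credits
print(claimed_credit(2000, 1))  # benchmark 2000, same hour -> 20.0 credits
```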

The kink in the Boinc credit system is that it doesn't accurately calculate how fast a given computer is at running the Rosetta science application (which is huge compared to simple addition or division). The Boinc benchmark doesn't accurately reflect how much data users are actually sending back.

The project team decided to create a new benchmark that would grant credits based on how much data users actually returned to the project. The easiest way to do this was to start with the Boinc credit 'claim'.

R@H runs many different work units at once; you can see this when you look at the work units in the Boinc Manager. For each, a running average of the Boinc credit claim is kept per simulation (a work unit will do as many simulations as it can within the timeframe you specified). This average is used to determine how many credits a user gets at the time they return their results. The credits given are 'credit granted'; what Boinc claims is 'credit claimed'.

The per-simulation credit value is allowed to change as results come in, to more accurately determine the number of credits that should be granted. This change is relatively small over time, however; the 10th person to return a simulation is going to get a value close to the 10,000th's.
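The running-average scheme above can be sketched as follows. The class name and the exact update rule are illustrative assumptions, not the project's actual server code:

```python
# Sketch of per-work-unit credit granting: each returned result
# updates a running average of the claimed credit per model, and
# every host is granted that average rate times the models it did.

class WorkUnitCredit:
    def __init__(self):
        self.total_claimed_per_model = 0.0
        self.results_returned = 0

    def grant(self, claimed_credit, models_completed):
        # Fold this host's per-model claim into the running average...
        self.total_claimed_per_model += claimed_credit / models_completed
        self.results_returned += 1
        avg_per_model = self.total_claimed_per_model / self.results_returned
        # ...then grant at the average rate, not the host's own claim.
        return avg_per_model * models_completed

wu = WorkUnitCredit()
print(wu.grant(20.0, 10))  # first host sets the rate (2.0/model) -> 20.0
print(wu.grant(40.0, 10))  # a high claimer is pulled to the average -> 30.0
```

This is why an inflated benchmark raises the *claim* but barely moves the *grant*: the granted rate is dominated by everyone else's average.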

Now we get rid of the Boinc benchmark: the only thing that matters is how fast a computer can complete a simulation... it will get the same number of credits regardless of how long it takes. With all other things being equal, a 2 GHz machine will take twice as long (and get half the credits/hour) as a 4 GHz machine (same CPU, just changing frequency).

Clock speed isn't the only thing that determines how fast a given calculation takes. Lack of RAM will obviously slow things down, as will a slower or fragmented hard drive. Architecture differences also come into play (the amount of cache a cpu has for example).

In the end, the amount of credits you receive is the sum of all these variables. If your machine is always getting less than the 'credit claimed', it's because of the Boinc benchmark not taking the type of work Rosetta is doing into account for your system.
ID: 40628 · Rating: 0
Martin P.
Joined: 26 May 06
Posts: 38
Credit: 168,333
RAC: 0
Message 40643 - Posted: 10 May 2007, 12:18:53 UTC - in response to Message 40628.  
Last modified: 10 May 2007, 12:36:36 UTC

In the end, the amount of credits you receive is the sum of all these variables. If your machine is always getting less than the 'credit claimed', it's because of the Boinc benchmark not taking the type of work Rosetta is doing into account for your system.


Ethan,

thanks for the explanation. My current benchmark results are (BOINC client 5.8.17):
Measured floating point speed: 1083.84 million ops/sec
Measured integer speed: 3446.62 million ops/sec

Using the optimized BOINC client 5.4.9, my benchmarks go up by a factor of 3, i.e. approx. 3,000 floating point and 10,000 integer. However, granted credit remains the same although claimed credit goes up significantly. Therefore the granted credit must be completely independent of the benchmarks.

...it will get the same number of credits regardless of how long it takes. With all other things being equal, a 2 GHz machine will take twice as long (and get half the credits/hour) as a 4 GHz machine (same CPU, just changing frequency).


Now we come a little closer to the issue I have tried to explain for days now (and nobody wants to understand): 2 computers with the same specifications should take the same amount of time to perform a certain number of calculations and therefore should claim and receive the same amount of credits. This is true for ALL projects that offer clients for different platforms, except for Rosetta@Home. In all other projects the differences in credits/time are in a range of ±10% between comparable computers, even for different architectures and operating systems. However, in Rosetta@Home the differences are bigger than 100% between Windows-based systems (compare the hosts I linked to here: Message ID 40399) and MacOS X machines with PowerPC processors - and this is simply due to very bad programming/optimization of the science application (they even admitted this: Message ID 26330). This has NOTHING to do with the BOINC client or benchmarks!


ID: 40643 · Rating: 0
dcdc
Joined: 3 Nov 05
Posts: 1830
Credit: 119,208,549
RAC: 2,517
Message 40645 - Posted: 10 May 2007, 13:34:32 UTC - in response to Message 40643.  

Now we come a little closer to the issue I tried to explain for days now (and nobody wants to understand): 2 computers with the same specifications should take the same amount of time to perform a certain number of calculations and therefore should claim and receive the same amount of credits.

This is true, but you're assuming PowerPC architecture is comparable to x86 architecture for R@H. We all understand that R@H isn't well optimised for PPC, as posted previously, and we also understand why they haven't made this a priority, also as posted. If someone were willing to look at improving the code for PPC I'm sure the project team would be willing to let them have a go.


This has NOTHING to do with the BOINC client or benchmarks!

Your claimed credit does! As Ethan posted, your claimed credit is based on your benchmarks * time. As your benchmarks are higher than your actual Rosetta throughput (due to the lack of optimisation for AltiVec etc.), your claimed credit will be higher than your granted... so your granted is low because of a lack of optimisation for PPC, and your benchmarks (and therefore claimed credit) are high because the benchmark has little in common with the R@H code.
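dcdc's point can be put as a toy calculation. All numbers, names, and rates here are invented purely to show the shape of the mismatch:

```python
# Claimed credit tracks the generic benchmark; granted credit tracks
# the models actually returned. A host whose benchmark overstates its
# Rosetta throughput (e.g. an unoptimised PPC build) claims more than
# it is granted. Every figure below is illustrative.

def claimed(benchmark_score, hours):
    return benchmark_score * hours           # benchmark * time

def granted(models_completed, credit_per_model):
    return models_completed * credit_per_model  # work actually returned

print(claimed(10, 4))    # benchmarks well: claims 40
print(granted(5, 4.0))   # but completed few models: granted 20.0
```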

I think we all understand the issues here, but are looking at it from different perspectives.

P.S. I get 'No Access' when i try to access:
https://boinc.bakerlab.org/rosetta/results.php?userid=84658.


ID: 40645 · Rating: 0
Astro
Joined: 2 Oct 05
Posts: 987
Credit: 500,253
RAC: 0
Message 40648 - Posted: 10 May 2007, 13:57:22 UTC - in response to Message 40645.  
Last modified: 10 May 2007, 13:58:21 UTC

P.S. I get 'No Access' when i try to access:
https://boinc.bakerlab.org/rosetta/results.php?userid=84658.


Because that link came from "inside" his account, which you can't get to without his email and password. By clicking on his username you get to https://boinc.bakerlab.org/rosetta/show_user.php?userid=84658. This works, and from there you can get to his "results" page.
ID: 40648 · Rating: 0
Ethan
Volunteer moderator
Joined: 22 Aug 05
Posts: 286
Credit: 9,304,700
RAC: 0
Message 40661 - Posted: 10 May 2007, 17:24:26 UTC - in response to Message 40643.  
Last modified: 10 May 2007, 17:24:50 UTC

2 computers with the same specifications should take the same amount of time to perform a certain number of calculations and therefore should claim and receive the same amount of credits. This is true for ALL projects that offer clients for different platforms, except for Rosetta@Home. In all other projects the differences in credits/time are in a range of ±10% between comparable computers, even for different architectures and operating systems. However, in Rosetta@Home the differences are bigger than 100% between Windows-based systems and MacOS X machines with PowerPC processors


Morning Martin,
I don't think this is quite right; architecture and OS differences do play a big role. A 486 overclocked to 2 GHz isn't going to be within 10% of a 2 GHz Core 2 (neglecting the whole 486 melting soon after being upped to 2 GHz). Similarly, PPC CPUs are an older design: new chips have more processing units within each core, faster access to memory, better branch prediction, etc. Similar differences exist between operating systems (although those differences aren't as significant).

ID: 40661 · Rating: 0




©2024 University of Washington
https://www.bakerlab.org