How much has your RAC Dropped Since 12/6/06

Message boards : Number crunching : How much has your RAC Dropped Since 12/6/06

MikeMarsUK

Joined: 15 Jan 06
Posts: 121
Credit: 2,637,872
RAC: 0
Message 34053 - Posted: 4 Jan 2007, 0:28:46 UTC


From 1325 to 1220 is a drop of 9.2%; it's hardly significant...
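A minimal sketch of the percentage-drop arithmetic behind these figures (the exact percentage depends on which starting RAC is used; 9.2% implies a starting value a little above 1325):

```python
def percent_drop(start: float, end: float) -> float:
    """Percentage drop from start to end, relative to the starting value."""
    return (start - end) / start * 100

# RAC values quoted in this thread.
print(f"1325 -> 1220: {percent_drop(1325, 1220):.1f}% drop")  # ~7.9%
print(f"1450 -> 1325: {percent_drop(1450, 1325):.1f}% drop")  # ~8.6%
print(f"1450 -> 1220: {percent_drop(1450, 1220):.1f}% drop")  # ~15.9%
```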


ID: 34053
zombie67 [MM]
Joined: 11 Feb 06
Posts: 316
Credit: 6,621,003
RAC: 0
Message 34061 - Posted: 4 Jan 2007, 6:46:06 UTC - in response to Message 34053.  


From 1325 to 1220 is a drop of 9.2%; it's hardly significant...


It's not? What's the definition of significant? 20%? 50%?
Reno, NV
Team: SETI.USA
ID: 34061
MikeMarsUK

Joined: 15 Jan 06
Posts: 121
Credit: 2,637,872
RAC: 0
Message 34069 - Posted: 4 Jan 2007, 8:44:02 UTC


Closer to the latter than the former, I'd have thought.


ID: 34069
trevieze
Joined: 8 Apr 06
Posts: 10
Credit: 542,792
RAC: 0
Message 34094 - Posted: 4 Jan 2007, 17:16:31 UTC

I have looked at my computers again:

PowerEdge 6450, Linux CentOS 4.2: Claimed 62, Granted 109
Tyan dual Athlon MP 2000+, Win2K: Claimed 207, Granted 124

I know there is a processor parity issue between the two, but shouldn't granted/claimed be closer?

The graph on the Tyan looks like this: "_"
ID: 34094
MikeMarsUK

Joined: 15 Jan 06
Posts: 121
Credit: 2,637,872
RAC: 0
Message 34100 - Posted: 4 Jan 2007, 17:26:03 UTC - in response to Message 34090.  
Last modified: 4 Jan 2007, 17:29:21 UTC

Significant, IIRC, is when the difference exceeds the margin of error. So 10% could certainly be significant given sufficient precision and a decent cross-section of the results. But I think MikeMarsUK gives it a different definition, more something like: "I think a 10% difference is completely acceptable."


I used an entirely non-statistical definition:

- Would I care about a 10% drop in my credit? nope.

- Would I care about a 50% drop in my credit? Maybe. But I have spent a lot of CPU hours on beta projects which don't publish credit, and have therefore been happy to accept a 100% drop in credit.

If there were some magic way to convert my 360,000 credit into another few completed models for the CPDN project, I wouldn't hesitate.

In different circumstances 10% would be something I care about - for example, in the other thread about model performance, where the difference means 10% extra scientific work for the project, adding 10% would be a major advance, and even 5% would be very significant to me. (Note the 'me' - this is an entirely personal judgement, and other people will have different opinions.)

ID: 34100
FluffyChicken
Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 34103 - Posted: 4 Jan 2007, 18:20:27 UTC

I would put 10% under insignificant; my reasoning is that we know credit per model varies much more widely than this.


But I still think this is what is happening:
To me it seems the Athlon XP, and maybe some other specific types of processors, are getting hit by this. As I said many posts above (or below, depending on your board preference ;), it could just be that these processors cannot compute the current tasks as fast, relative to the other processors, as they could previous tasks (maybe due to this revision or the actual type of calculation).

This is a problem we will come across with 'work based' crediting: they just are not doing as much 'work' as they used to.
Maybe it's hitting 256MB caches harder than before, or the brute force is just not there for this type of calculation. It will happen as the code changes and evolves.

It may of course be something else, but this is my assumption; we'll see what the R@H team can find.
Team mauisun.org
ID: 34103
Stevea

Joined: 19 Dec 05
Posts: 50
Credit: 738,655
RAC: 0
Message 34112 - Posted: 4 Jan 2007, 20:19:47 UTC
Last modified: 4 Jan 2007, 20:26:12 UTC

From 1325 to 1220 is a drop of 9.2%; it's hardly significant...

Add this to the drop from over 1450 at Predictor, with their quorum of 3 WUs.


Maybe it's hitting 256MB caches harder than before, or the brute force is just not there for this type of calculation.

I'm sure you meant 256k, but these are Barton-core CPUs, and they have 512k cache.


I would put 10% under insignificant; my reasoning is that we know credit per model varies much more widely than this.

Then it should even out; it has not.

9.2 percent is a big deal to a scientist. I think that the cross-platform / cross-project credit system would consider this a big deal.

I consider a drop from 1450 to 1325 a big deal, and I consider a drop from 1325 to 1225 a bigger deal in the overall BOINC picture.

Why would someone want to come here and crunch for 1200 PPD when they can go to another disease-based BOINC project and get 1450 PPD?

To put it bluntly, some people do care about how much credit they earn, and this project, in its current credit-granting formulation, is not at the top of the list of projects to sign up for.

If a 20% variance in credit across projects is allowable, why not be on the high side and get more people to come and crunch your project? Some people are not going to come here and crunch this project for 20% less credit than they can get from another.

So it may not be a big deal to some, but a 10-20% difference is a big deal to many.

BETA = Bahhh

Way too many errors, killing both the credit & RAC.

And I still think the (New and Improved) credit system is not ready for prime time...
ID: 34112
FluffyChicken
Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 34116 - Posted: 4 Jan 2007, 21:22:35 UTC

Hence why I stated the current drop could be because you are doing relatively less work. (And yes, kB not MB, oops :)

If you are after the credits then yes, it may be beneficial to go somewhere else. For example, at ABC@Home the 64-bit client is 3x faster than the 32-bit client, and last time I heard they will be giving 64-bit client users the appropriate 3x credit.

In the same way, people with SSE, SSE2, SSE3 and soon SSSE3 would want to go to SETI@home if they wanted to legitimately gain credit quicker, as that client gets through tasks much quicker (so doing 'more scientific work').

My Pentium-M is granted more than claimed, so I am getting more PPD here than at other projects.
It is always going to happen.

The choice comes down to this:
decide on a project that suits you
or
decide on a project that suits your processor.

Me, I go for the one that suits me.



Stevea, what I still cannot understand is why you mention other projects getting more, since you only run Rosetta@home (Predictor is no more). So you are only comparing to other people here (which would be the same if you went to F@H - you would only be comparing to other F@H users), and since you are only comparing to other Rosetta users, you are getting the correct credit you deserve (wrt work done) compared to other Rosetta users. We are all in the same boat.
That is what I cannot understand.


Like I said, you may want to get out now if Who? is really causing a problem ;-)
This is the BOINC plan for credits:
Here's a road map for improving the credit system:

1) near term: add support for "benchmark workunits".
The time it takes a machine to finish the benchmark WU
determines how much credit per CPU second it gets for subsequent WUs.
If all projects use this, we can get rid of CPU benchmarking
in the core client (which is the root of many problems).

2) longer term: add support for project-defined "credit pricing".
This lets projects say how they want to assign credit,
e.g. a host with 4 GB RAM gets twice the credit per CPU sec
of a 1 GB machine.
Or credit for long-term disk storage.
Or they can give credit for non-computation stuff like
recruiting participants or doing good message board posts.
The only restriction is that the maximum possible credit per computer-day,
averaged over all computers currently participating in any BOINC project,
is 100 (or some constant).

-- David
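A rough sketch of how item 1's "benchmark workunit" calibration could work; the constants and function names below are invented for illustration, not taken from any actual BOINC release:

```python
# Hypothetical sketch of "benchmark workunit" calibration (item 1 above).
# CREDIT_PER_BENCHMARK_WU and REFERENCE_BENCHMARK_SECONDS are invented numbers.

CREDIT_PER_BENCHMARK_WU = 10.0        # credit the benchmark WU is defined to be worth
REFERENCE_BENCHMARK_SECONDS = 3600.0  # reference machine's time on the benchmark WU

def credit_per_cpu_second(benchmark_seconds: float) -> float:
    """A host's credit rate, derived from its time on the benchmark WU."""
    return CREDIT_PER_BENCHMARK_WU / benchmark_seconds

def claimed_credit(cpu_seconds: float, benchmark_seconds: float) -> float:
    """Credit claimed for a normal WU that took cpu_seconds on this host."""
    return cpu_seconds * credit_per_cpu_second(benchmark_seconds)

# A host twice as fast as the reference finishes the benchmark WU in 1800 s,
# so it claims the same credit for half the CPU time on later work units.
print(claimed_credit(cpu_seconds=7200, benchmark_seconds=REFERENCE_BENCHMARK_SECONDS))  # 20.0
print(claimed_credit(cpu_seconds=3600, benchmark_seconds=1800.0))                       # 20.0
```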


Note the last part of number 2; that is like what is happening here.
Our 'constant' is the claimed credit all added up for the day; this claimed credit is shared out among all the participants relative to the amount of science work they have done.
(This is effectively how our system works, hence again why you are getting less now: because Who? is getting more.)


Team mauisun.org
ID: 34116
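A minimal sketch of the "shared out relative to work done" idea described above, with invented hosts and numbers: the day's claimed credit is pooled and granted back in proportion to models completed, so a host's grant depends on what everyone else claims per model.

```python
# Invented example: (host, claimed credit for the day, models completed).
hosts = [
    ("athlon_xp",  120.0, 10),
    ("pentium_m",  100.0, 12),
    ("core2_xeon", 150.0, 30),  # fast machine: many models per unit of claimed credit
]

total_claimed = sum(claimed for _, claimed, _ in hosts)
total_models  = sum(models for _, _, models in hosts)
credit_per_model = total_claimed / total_models   # the shared 'constant' for the day

for name, claimed, models in hosts:
    granted = models * credit_per_model
    print(f"{name:10s} claimed {claimed:6.1f}  granted {granted:6.1f}")

# The athlon_xp is granted ~71 despite claiming 120, because the fast machine
# completes far more models per claimed credit and pulls the average down.
```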
Nothing But Idle Time

Joined: 28 Sep 05
Posts: 209
Credit: 139,545
RAC: 0
Message 34117 - Posted: 4 Jan 2007, 21:29:02 UTC - in response to Message 34112.  
Last modified: 4 Jan 2007, 21:29:21 UTC

...Some people are not going to come here and crunch this project for 20% less credit than they can get from another.

The choice is yours, or theirs.
...So it may not be a big deal to some, but a 10-20% difference is a big deal to many.

Editorial: There has been much discussion and turmoil over the subject of credit this past year... mostly about what constitutes "fair" or "enough". The wounds left behind are a testament to the notion that a drop in RAC can stir the emotions of many and become worthy of "discussion" or "war". We just have to accept that some people care more or less about RACs, and that is just the nature of the beast. One camp of thought should not disdain the other camp.
ID: 34117
AMD_is_logical

Joined: 20 Dec 05
Posts: 299
Credit: 31,460,681
RAC: 0
Message 34126 - Posted: 4 Jan 2007, 22:45:43 UTC

This drop is very puzzling. A number of people see it on their machines, but I don't see it on mine.

Do people who see the drop have automatic Windows Update running? (I wouldn't be surprised if Microsoft put out an update that diverted cycles to who-knows-what.)

If the drop is due to less work being done (for whatever reason), then it is hurting the science and should be tracked down.
ID: 34126
BennyRop

Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 34134 - Posted: 4 Jan 2007, 23:52:21 UTC
Last modified: 4 Jan 2007, 23:52:52 UTC

My RAC has dropped since Dec 14th because the program has stopped running from time to time, and has not been generating 86,400 seconds of work a day (or 604,800 seconds of work a week).

Stevea gives the impression that he's still producing the same number of seconds of work a day - just getting less credit for it.

So there are multiple causes for the lower RAC among those of us seeing lower RAC.

And while this probably isn't Stevea's problem, getting infected with the spyware/worms/viruses that were being spammed over the last month would also slow systems down. Or installing a CPU hog of an antivirus app like Norton's.

It takes a bit of work to prove and identify the source of the problem. My system uses Windows automatic update to download new updates, but I believe my problems showed up with 5.4.3.






ID: 34134
Stevea

Joined: 19 Dec 05
Posts: 50
Credit: 738,655
RAC: 0
Message 34135 - Posted: 4 Jan 2007, 23:59:14 UTC

As I stated before, I am not a credit whore; I'm still here and crunching because I believe in what this project is trying to achieve. I lost a family member to cancer, and I will only crunch disease-related projects. If I was a credit whore I would have been here crunching Rosetta when they were granting whatever credit you claimed, instead of crunching Predictor with their quorum of 3.

I actually came here about 2 weeks before Predictor shut down because they could not keep their servers running for more than 2 weeks at a time. And this project looked like the best project running at the time. I also liked that the project managers were keeping everyone informed about the progress and direction that they were taking.

Now, when the war was raging, the 2 biggest points brought up for changing the credit system were over-claiming clients and cross-project parity. Everyone who says this was not true had better go back and reread the threads. Because there is no parity.

If parity was not an issue, then there was no reason to change the credit system in the first place. If you were only running one project, who cares how much credit they awarded? But "this is BOINC", the argument went, "and there has to be cross-project parity", so how much one project grants compared to another has to enter into this discussion. Not by my doing, but by the doing of others who had the credit system changed to their vision of what BOINC should be.

That's why I keep bringing up Predictor's credits: they have a quorum of 3 WUs to award credit. I understand that there is no reason for a quorum system here. Here it's a different system, and it's nowhere near the quorum way of awarding credit. So this project's credit must be compared to the other projects' credit. It was one of the reasons given for changing it.

Now it's something completely different.

Something happened 3-4 weeks ago and the credits awarded to some people changed. Not everyone - some. Some have recovered, others have not. Some have gone up, others have gone down. But something changed.

That was the reason I started this thread: I noticed that my credit, and that of other members of my team, took a nose dive. I wanted to see if it was just me, or were others affected also? Was there something wrong with all 4 of my machines at the same time? Why had my machines remained constant for the last 8 months, but all of a sudden dropped? What changed?

I simply wanted to know why.
BETA = Bahhh

Way too many errors, killing both the credit & RAC.

And I still think the (New and Improved) credit system is not ready for prime time...
ID: 34135
Who?
Joined: 2 Apr 06
Posts: 213
Credit: 1,366,981
RAC: 0
Message 34144 - Posted: 5 Jan 2007, 3:12:55 UTC - in response to Message 34135.  

Guys, remember? It is all about fun here, while helping a good cause :)

I am fairly happy with my 3800 RAC on my monster, it went up pretty well... the best AMD machine is around 1600... hehehehehe, can't wait to see what K8L can do :)

I will be disconnecting my Xeons this week, and we will see if I am the cause of all your pain...

Guys, I'll be at CES, see you there!
who?

ID: 34144
FluffyChicken
Joined: 1 Nov 05
Posts: 1260
Credit: 369,635
RAC: 0
Message 34157 - Posted: 5 Jan 2007, 11:27:04 UTC

Stevea,
absolutely, it was firstly about
1) credit cheating (manipulating files)
and secondly about fairer credit, so people got credited for work done, not what a benchmark says you should get.

Cross-project parity, of course, was wanted by the multi-project users. Me, I was not so bothered as long as it was in general close enough (but like I said, that's always going to be on a 'project view', not an 'individual CPU' view, since the latter cannot be achieved).

Me, I just want it to be harder to bugger up the credits, leaving them completely pointless (file changing, client altering), so first and foremost everyone is in the same boat and has to use the same system.

I'll see if I can get an Athlon XP running for some period of time and see how it is faring compared to its claimed credit.

AMD_is_logical, I don't think it's actually less work being done, only less relative to the other Rosetta people - be it a bug or the type of work. But since I cannot keep mine on 24/7 at the moment, it gets hard to track. I'll have a look at my teammates'; they have a nice selection.

Who? good luck on your FFT crunching :-)
Team mauisun.org
ID: 34157
Who?
Joined: 2 Apr 06
Posts: 213
Credit: 1,366,981
RAC: 0
Message 34199 - Posted: 6 Jan 2007, 5:54:38 UTC - in response to Message 34157.  

I am shifting my machines to another project for the next 2 weeks; let's see if it changes the scores of other people.

The V8 machine is off; its max RAC went up to 3850 :)


who?
ID: 34199
dcdc
Joined: 3 Nov 05
Posts: 1831
Credit: 119,450,636
RAC: 10,833
Message 34206 - Posted: 6 Jan 2007, 12:55:03 UTC

I believe the fall in RAC seen on some machines can be explained simply by an increase in average cache size (a general trend for both AMD and Intel) and the introduction of more efficient processors (C2x). As the Whetstone and Dhrystone benchmarks haven't improved at the same rate as the true processing ability of the processors, these faster CPUs will be putting in lower claimed credits, thereby dragging the average credit down.

Another factor might be a greater reliance on cache by the newer Rosetta WUs, which would cause CPUs with larger caches to be less affected.

Who?: removing your machines will have no real effect on this, as it's due to the mean effect of all computers that are returning results.

HTH
Danny
ID: 34206
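dcdc's mechanism can be made concrete with a small sketch; the numbers are invented and the benchmark-to-claim relationship is only an assumption (roughly proportional to Whetstone/Dhrystone), not the exact BOINC cobblestone formula:

```python
def claimed_per_model(benchmark_score: float, models_per_hour: float) -> float:
    """Claimed credit per model: benchmark-based claim per CPU-hour divided by
    real Rosetta throughput. Units are arbitrary; this is only an illustration."""
    claim_per_hour = benchmark_score
    return claim_per_hour / models_per_hour

# An older CPU whose benchmark score tracks its real Rosetta throughput:
athlon_xp = claimed_per_model(benchmark_score=1.0, models_per_hour=1.0)
# A newer CPU whose real throughput (3 models/hour) outruns its benchmark (2.0):
core2     = claimed_per_model(benchmark_score=2.0, models_per_hour=3.0)

print(f"Athlon XP claims {athlon_xp:.2f} per model")  # 1.00
print(f"Core 2 claims    {core2:.2f} per model")      # 0.67

# Granted credit is based on the average claim per model across all hosts,
# so as more under-claiming CPUs join, everyone's credit per model drifts down.
```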
Nothing But Idle Time

Joined: 28 Sep 05
Posts: 209
Credit: 139,545
RAC: 0
Message 34208 - Posted: 6 Jan 2007, 15:06:11 UTC

@who?
I am shifting my machines to another project for the next 2 weeks; let's see if it changes the scores of other people.
Even if some RACs increase, you can't prove it resulted from your action. There are too many variables and virtually no constants.

@dcdc
You offer rational and plausible theories, though I don't know how we can prove/disprove them. A number of things may collectively be contributing to the observed downtrend in RACs; e.g., termination of the BOINC daemon, the introduction of Core 2, and new protein studies that may affect efficiency. Seems to me we will have to wait and see what happens over the next month or two and see if the trend reverses, stays constant, or continues down. My Rosetta RAC has gone up because Einstein has been down several days recently and I have nothing but Rosetta to process. And I have taken steps to mitigate the infamous BOINC client terminations in the middle of the night by suspending network activity unless I'm available to monitor it. For the last 2 months my RAC has been all over the charts and totally erratic.
ID: 34208
Feet1st
Joined: 30 Dec 05
Posts: 1755
Credit: 4,690,520
RAC: 0
Message 34258 - Posted: 7 Jan 2007, 3:58:50 UTC

One idea I don't think we've discussed yet is that Rosetta has two task sizes: those for machines with <512MB (I think it is), and those that will run only on machines with 512MB or more. The reason they flag some tasks as needing a larger-memory machine is that the running thread has a significantly larger need for memory. This then implies that there are more memory accesses, and that the speed of memory access may become a more significant component of runtime on those tasks.

Could the project team respond here?
1) Is it all the "docking" tasks that need the higher memory PCs?
2) Are there other tasks as well that require higher memory?
3) Is there any divergence between the ratio of claimed vs granted credit on the high memory tasks as compared to the ratio of normal memory tasks?
4) Since the high memory tasks require more resources to run, is there any adjustment made to how credit is granted? I guess I'm saying it seems like perhaps these tasks should be awarded a little more credit than a task that requires less memory. Since the BOINC benchmark doesn't account at all for memory use, the PCs that crunch the high memory tasks will not CLAIM anything more than they would for a normal memory task. So, it would require adjustment on the server to grant such "premium credit" for crunching high memory tasks.
Add this signature to your EMail:
Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might!
https://boinc.bakerlab.org/rosetta/
ID: 34258
Stevea

Joined: 19 Dec 05
Posts: 50
Credit: 738,655
RAC: 0
Message 34261 - Posted: 7 Jan 2007, 5:21:02 UTC

Just for clarification.

3 machines have 2x512 MB = 1 GB dual channel @ 2-2-2-11
1 machine has 1x512 MB single channel @ 2-2-2-5

All have dedicated video cards (two 5200s & two 6200s) - no on-board video!

My credit began to sneak up a little 3 days ago - before Who? shut his machines down. We'll see if it continues to climb, or if it's just a bump.

All 4 machines are running the latest DirectX updates.

Steve
BETA = Bahhh

Way too many errors, killing both the credit & RAC.

And I still think the (New and Improved) credit system is not ready for prime time...
ID: 34261
Who?
Joined: 2 Apr 06
Posts: 213
Credit: 1,366,981
RAC: 0
Message 34262 - Posted: 7 Jan 2007, 7:02:10 UTC - in response to Message 34261.  

I think the top 1 machine gets used in the equation of the scores.

One reason to think so: my Top 2 machine dropped immediately when I stopped it, while the Top 1 machine has not dropped 1 unit yet (3,855.88 after 2 days).
My 2 other machines also dropped immediately.

There is obviously special treatment for the #1 machine.
I know many people have already told me that the scoring does not work this way, but I can't explain the RAC of the Top 1 machine with what was explained to me.


who?
ID: 34262
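One possible explanation for the "frozen" top machine: BOINC's recent average credit decays exponentially with (as far as I recall) a one-week half-life, but the stored value is normally only recomputed when new credit is granted, so a host that stops reporting can keep showing its old RAC for a while. A minimal sketch of the decay, assuming the one-week half-life:

```python
import math

HALF_LIFE_DAYS = 7.0  # BOINC's RAC half-life, as far as I recall

def decayed_rac(rac: float, idle_days: float) -> float:
    """RAC after idle_days with no new credit, once the average is actually recomputed."""
    return rac * math.exp(-idle_days * math.log(2) / HALF_LIFE_DAYS)

rac = 3855.88  # the top machine's RAC quoted above
for days in (2, 7, 14):
    print(f"after {days:2d} idle days: {decayed_rac(rac, days):7.1f}")
# -> roughly 3163 after 2 days, 1928 after 7, 964 after 14.
# If the stored average is only updated when the host next reports work,
# the website can keep displaying 3,855.88 in the meantime.
```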