Posts by Dagorath

1) Message boards : Cafe Rosetta : Task suspended by user (Message 59673)
Posted 19 Feb 2009 by Dagorath
Post:
Hi! I have been running SETI, Rosetta and Einstein on my new Intel 2.2 GHz dual-processor computer. A couple of weeks ago I changed my preferences from 33% each to 20% SETI and 40% each for the other two, as SETI was giving me too many tasks to complete in time.


If any of your projects are giving you so many tasks that you can't complete them all in time, then you should look at the size of your task cache and consider decreasing it. Decreasing the offending project's resource share probably won't cure the problem of getting more work than you can complete on time.

Also, which version of BOINC are you running? BOINC 6.4.5 and later versions have bugs that may be affecting you. Unless you want to process tasks on an nVidia GPU, you're probably better off staying with version 6.2.18.
2) Message boards : Number crunching : LHC@home gives BOINC a bad name (Message 59453)
Posted 8 Feb 2009 by Dagorath
Post:
For me it doesn't matter whose loss it is because waste is never a good thing. If the waste can be eliminated with minimal effort, as it can be at LHC@home, then to not do so is just lazy and irresponsible.

Let me tell you that I think you're right.


It is only waste if they really only need a quorum of 3 ... if they want / need a quorum of 4 then there is no waste at all ...


But you haven't even begun to show they need a quorum of 4. Them wanting a quorum of 4 is just a story you pulled out of the air after all your other reasons for supporting 5/3 were shown to be ridiculous.

In any case, it is a red herring because there are other sources of waste on BOINC projects and we do not see those projects being dragged through the mud.


LHC@home has had over 2 years to eliminate the waste they create with their ridiculous 5/3 policy. The fix takes about 5 minutes but they have done nothing except implement cancels which many of us correctly predicted would never fix the problem.

There are other projects wasting CPU cycles and they are trying to fix the waste. If they were not then I would be going after them too but at least they are trying. LHC@home isn't even making an honest effort.

There is really no point to this "discussion" in that the two sides have been presented and you can pick your side and what you want to believe.


Why is it a "discussion" instead of a discussion? Because nobody agrees with you? Lol!

In the LHC Forums one participant asked a pertinent question ... who to believe?

I suggest you look at their tone. Are they respectful of those that they do not agree with? Do they acknowledge errors? Do they call other participants names?


Good advice. In this discussion you have twisted every word I've said, accused me of saying things I never said and of doing things I never did, for example accusing me of saying LHC@home is involved in a nefarious plot to deprive other projects of CPU time.

You've made up totally ridiculous reasons for keeping the 5/3 policy and have attempted to convert your speculation into fact with absolutely zero proof. In fact you are telling us that we should believe you because you stated your lies and twisted logic politely, and that I am impolite for exposing your lies and nonsense for what they are.

You still haven't explained why LHC@home needs a quorum of 4 when they obviously think they need a quorum of only 3. Nor have you explained why they would use such a convoluted and inefficient method as the 5/3 policy for getting 4 when they could just use the far simpler 4/4 method. It would take less network bandwidth, a smaller database and fewer resources in general.

Read the posts, then decide ...


Indeed, read and ask questions and think. Buck's story falls apart in about 2 minutes once you understand how the quorum system works and think it through.

I, for one, see no reason to not do work for LHC@Home ... and if you don't agree, well, then there is more work for me ... :)


Go ahead and waste your precious resources on whatever folly spins your prop. Fools and their money are soon parted and sometimes there is no amount of common sense that can convince them otherwise. You seem to think there is some prestige associated with crunching LHC@home tasks. There is no prestige at all.
3) Message boards : Number crunching : LHC@home gives BOINC a bad name (Message 59432)
Posted 7 Feb 2009 by Dagorath
Post:


The discussion is important to Rosetta and all BOINC projects because LHC@home needlessly wastes CPU time that would otherwise go to other projects.



If the CPU time is wasted, why do you think it would go to other projects?


Good point. It would go to other projects if and only if the machine is attached to other projects. If the computer is not attached to other projects as well as LHC@home then of course the waste is almost a non-issue.

I peeked at over 100 user profiles at LHC@home and all of them were attached to other projects. I am not saying that proves 100% are attached to other projects. I am saying it appears that for most users the cycles wasted by LHC@home would go to other projects.


It just seems to be a more compelling argument for LHC, since if the CPU time is wasted, it is LHC's loss.



For me it doesn't matter whose loss it is because waste is never a good thing. If the waste can be eliminated with minimal effort, as it can be at LHC@home, then to not do so is just lazy and irresponsible.
4) Message boards : Number crunching : LHC@home gives BOINC a bad name (Message 59430)
Posted 7 Feb 2009 by Dagorath
Post:


The discussion is important to Rosetta and all BOINC projects because LHC@home needlessly wastes CPU time that would otherwise go to other projects.



If the CPU time is wasted, why do you think it would go to other projects?


Good point. It would go to other projects if and only if the machine is attached to other projects. If the computer is not attached to other projects as well as LHC@home then of course the waste is almost a non-issue.

I peeked at over 100 user profiles at LHC@home and all of them were attached to other projects. I am not saying that proves 100% are attached to other projects. I am saying it appears that for most users the cycles wasted by LHC@home would go to other projects.
5) Message boards : Number crunching : LHC@home gives BOINC a bad name (Message 59429)
Posted 7 Feb 2009 by Dagorath
Post:
Yesterday you apologized. Today you carry on with more confused nonsense. One doesn't need to have credits at LHC@home to see that there is 25% waste there and to understand why it occurs. That's all plain as day, in the same sense that one doesn't need to actually leap from a high bridge to know what the consequences are.

Anyway, I have only 1 account at LHC@home and that account has credits, a fact which was obvious to you from my posts to you there, because my credits and RAC appear just below my name to the left of all my posts there. There is not 1 post from a Dagorath in that thread that shows a 0 credit tally. Not 1 shred of evidence at LHC@home supports your claim, all the evidence at LHC@home refutes it, yet you argue your point for 2 days. Amazing.

The problem arose because you don't do your homework and you are confused. Let me point out all the other issues on which you are confused.

1) I have never implied that LHC@home is deliberately hurting other BOINC projects. They ARE hurting all the other BOINC projects due to the waste inherent in their IR 5 for Q 3 policy, but the hurt is an unintentional side effect and not one of LHC@home's goals. And I have never said otherwise. Where you are confused is where you think readers are stupid enough to believe you when you twist my words around to make it appear that I said hurting the other projects is one of LHC@home's goals. You have no respect for the readers here, absolutely no desire to search for facts and present them to your fellow crunchers. On top of your lack of respect and deliberate lies, you moan continually about how you get no respect. How confused can you be?


Well, I do find it interesting that for someone whose research ability you have no respect for, you quote my numbers below ... if my research is that suspect, then why did you rely on it to make one of your points?


How quickly and conveniently you forget. I didn't, as you claim, rely on you. As I stated in an earlier post, Ingleside and I did the research first, or at least reported it first, back at LHC@home. The survey you did later, or at least reported later, merely agreed with what Ingleside and I had already found and reported. You say you've found all the posts, well, go ahead and read them for yourself and post the evidence that proves otherwise. Right, the evidence in the written record once again refutes your nonsense.

Next, I never accused you of accusing LHC@Home of a deliberate policy to impact other projects.


Taken to task, you're forced to change your story yet again. At the end of your message 59353 in this thread you state (bolding added by me):

Anyway, I thought that someone should present the rebuttal point to his assertion that there seems to be an evil nefarious plot by LHC@Home to deprive Rosetta@Home of computing resources ...


So what's your story today? If the "his" in "his assertion" doesn't refer to me then who does it refer to? Bugs Bunny?

As to insulting the readers here: I have only posted a rebuttal to your proposal, to inform the readers that there are other considerations. As there always are.


Of course there are always other considerations, but unfortunately for you none of your considerations concerning the 5/3 policy at LHC@home holds any water. The quotes above prove you've tried to put words in my mouth and then deny it. You've also asserted I am not qualified to comment, but that's all been exposed for the lies and nonsense that it is. The fact that you obviously believed readers here would buy it proves you have no respect and think the rest of us are all dumber than dirt. It didn't work for Nixon, it ain't gonna work for you. No matter how politely you word your nonsense, your lies and your insults, they are still just lies, insults and nonsense.

The fact that you personally do not want to recognize that LHC@Home has operational and funding constraints does not mean they do not exist.


I confess that I was unaware of the fact that LHC@home is no longer doing work that is essential to the design or operation of the collider and never will be again (I believe you made that point, correct me if I am wrong). If that is true then I agree, there is certainly no hope for funding from CERN or LHC. I have always admitted that configuring the LHC@home server to issue resends to fast reliable hosts would take some time and tweaking (probably 100 man-hours or more, for new scripts and database mods) and perhaps some money.

Since the money isn't going to happen and the only programmer at LHC@home appears to be old, frail and retired, I'll do the sensible thing and drop the proposal to turn on and configure the "issue resends to fast reliable hosts" function, and just go with the other part of the proposal, which is to shorten the deadline and replace the tasks-per-day limit with a per-core limit on results in progress. Those changes would take less than 5 minutes.

So, I have tried to point out that these issues are there and that can explain why no change has happened.

Did you make the proposal? Yes.
Did the administrator take it up with the scientists? Yes.
Did they allow a change? No.
Were you satisfied? No.

And here we are. You got your account banned in LHC@Home not because they are covering up anything ... but because you do not seem to want to be reasonable. You say that you want a debate of the issue, yet, I think, that most readers of this thread might come to the conclusion that a fair debate of the issue is not what you are interested in.


Quite the other way around. The LHC@home admins did not take it up with the scientists. They took it up with the programmer. That's why I continue to press the issue. The thread at LHC@home was censored because I brought up facts and issues that embarrass LHC@home because they point to incompetence, sloth and a cover-up of the fact that they aren't doing anything vital to the construction or operation of the collider, yet continue to claim they absolutely MUST have fast turnarounds and therefore MUST have the 5/3 policy. They don't need fast turnarounds, and even if they did they could have that with near 0 waste. Those are the lies they've told, and there is no disrespect in calling a liar a liar because liars deserve no respect.

2) You still haven't explained how shortening the deadline on a project's tasks makes the work units achieve quorum slower.


I never said that lowering the deadline achieves quorum slower. I did say that it would disenfranchise many participants that want to contribute to LHC@Home and may not have computers that are as fast as mine. So, as a compromise to allow the broadest participation, the project came up with this formulation that is working for them and for most participants.

Your change would benefit people like me, and apparently you, who have fast computers with high speed connections and so forth to be able to grab more work. Which is the real motivation here (IMO).


Why would I want more LHC@home work? Hmmm?

And for Pete's sake... the fastest computers I have are old Athlon 64+ and P4, as you can see in the public records if you would just do your homework for a change. Please tell us exactly how the changes I propose would give me an advantage.

3) You have claimed that LHC@home uses the IR 5 for Q 3 policy because they actually want a quorum of 4 instead of 3. The easy and efficient way to get a quorum of 4 is to just set the quorum at 4 instead of 3. Why wouldn't LHC@home just do it that way? Especially since your way doesn't guarantee them a quorum of 4 anyway? That is your last remaining defense of the 5/3 policy. If you can't provide a reasonable answer to the question then you have lost the debate.


Because, as I have also stated, if they changed to a quorum of 4 with an issue of 5, then the average quorum would be over 4. It would be a number between 4 and 5 ...


Not if they used initial replication 4 for quorum of 4.

If they changed it to issue 3, then for every instance where there was a miss in the initial set you have to re-issue the work and wait another deadline period. Again, cutting the deadline in half and an issue of 3, quorum of 3 might get the project back to the point where they already are ...


Wrong. No might about it. In fact they would be ahead of where they are now because now they have to wait for up to 14 days for quorum if 1 result needs a do-over. With a 3 day deadline they would have to wait a maximum of only 6 days, assuming in both cases the miss is made up on the first re-send. If the re-send also misses then the advantage to the 3 day deadline is even more pronounced.
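
To make the arithmetic concrete, here is a toy sketch (my own illustration, nothing from LHC@home's actual server) of the worst-case wait for quorum when missed results are reissued:

# Worst-case time to quorum: one full deadline period for the initial
# issue, plus one more per round of resends. (Illustration only; the
# day counts are the ones used in this post.)
def worst_case_days(deadline_days, resend_rounds=1):
    return deadline_days * (1 + resend_rounds)

print(worst_case_days(7))  # current 7-day deadline -> 14 days
print(worst_case_days(3))  # proposed 3-day deadline -> 6 days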

So, by investing time and effort and money, and staff,


Setting the deadline to 3 days takes less than 5 minutes.

it is entirely possible that they would achieve the goal of getting the minimum quorum of 3 in roughly the same time they are now ... then again, they might not ... which is another point you refuse to accept.


I refuse to accept it because the "logic" which leads you to that erroneous conclusion is as absurd as the "logic" that says there are no bones in ice cream therefore the moon might be made of Swiss cheese.

There is a non-zero possibility that after the investment of resources that the project does not have, the new "Dagorath Plan" does not work. Which is perhaps the most important point you refuse to accept ... yet how many projects have made a change to achieve a particular end, only to find that the change did not accomplish that end?


Keep in mind that the "Dagorath Plan" no longer includes the "issue resends to fast reliable hosts" item. Now tell us the names of the projects that shortened the deadline only to find that the quorums/batches took longer rather than shorter.

The last point is from the first part, but it is more appropriate to close on this note I think, to quote myself:

The posts and threads were not censored because of an interest in a "Cover up" or because they are not telling the truth. The censorship came about because of Dagorath's manner and tone.

Heaping abuse on the project scientists, administrators, and other participants that disagree with his analysis is the reason that the threads and or posts were moderated.

Just as if you become abusive here in Rosetta, LHC took action to maintain a civil atmosphere in their project.


Well, if you don't want to be called a piece of cheese then don't act like a piece of cheese. Or else just get used to people calling you cheese.

Oh, and where do I moan about getting no respect?


Where do you not?

6) Message boards : Number crunching : LHC@home gives BOINC a bad name (Message 59402)
Posted 6 Feb 2009 by Dagorath
Post:
@Mikey,

You are correct and I was wrong. My apologies to all for my error. The problem arose because, well, no matter ...




Huh? You swagger in here full of bluster about how I am rude and insulting and my tone is unacceptable. You insult us all with your lies and spin, refuse to engage in honest debate and answer questions, and then you say it doesn't matter? Doesn't matter because you're His Majesty and we're just trash?

The problem arose because you don't do your homework and you are confused. Let me point out all the other issues on which you are confused.

1) I have never implied that LHC@home is deliberately hurting other BOINC projects. They ARE hurting all the other BOINC projects due to the waste inherent in their IR 5 for Q 3 policy, but the hurt is an unintentional side effect and not one of LHC@home's goals. And I have never said otherwise. Where you are confused is where you think readers are stupid enough to believe you when you twist my words around to make it appear that I said hurting the other projects is one of LHC@home's goals. You have no respect for the readers here, absolutely no desire to search for facts and present them to your fellow crunchers. On top of your lack of respect and deliberate lies, you moan continually about how you get no respect. How confused can you be?

2) You still haven't explained how shortening the deadline on a project's tasks makes the work units achieve quorum slower.

3) You have claimed that LHC@home uses the IR 5 for Q 3 policy because they actually want a quorum of 4 instead of 3. The easy and efficient way to get a quorum of 4 is to just set the quorum at 4 instead of 3. Why wouldn't LHC@home just do it that way? Especially since your way doesn't guarantee them a quorum of 4 anyway? That is your last remaining defense of the 5/3 policy. If you can't provide a reasonable answer to the question then you have lost the debate.
7) Message boards : Number crunching : LHC@home gives BOINC a bad name (Message 59352)
Posted 5 Feb 2009 by Dagorath
Post:
I guess I'm still trying to follow how 2 out of 5 is 25%. But math wasn't my strongest subject. And if a person does not follow that most basic discussion element, they're not going down the rest of the stream with you.


Good point, mod.sense. The numbers don't appear to add up for folks who are not familiar with LHC@home, so it deserves some explanation.

LHC@home work units have an initial replication of 5 for a quorum of 3 (IR 5 for Q 3). That leaves 2 potentially redundant tasks. LHC@home attempts to cancel redundant tasks, but they get canceled if and only if they have not started crunching when the host contacts the LHC@home server. It turns out that in practice, 1 of the 2 potentially redundant tasks almost always gets canceled but the other one gets crunched in spite of the fact that the work unit has already achieved the quorum of 3. So you have, on average, 1 task out of 4 that is wasted effort, and 1 in 4 = 25%.

Paul D. Buck prefers to downplay the waste by saying that 1 of the 5 initial replicas is waste, so the number is actually 20%. I prefer to calculate the percentage based on the actual number of tasks that get crunched, which is 4 in most cases. Paul can say only 20% of initially replicated tasks is waste. I can say 25% of crunched results is waste. Even 20% is far too much in my opinion.
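
For anyone who wants to verify the two percentages, here is the arithmetic as a small sketch (my own, using only the figures stated above):

issued = 5      # initial replication
quorum = 3      # results the project actually needs
canceled = 1    # the 1 redundant task that usually gets canceled in time

crunched = issued - canceled   # 4 tasks actually get crunched
wasted = crunched - quorum     # 1 task crunched beyond the quorum

print(wasted / issued)         # 1/5 = 0.20, Paul's 20% of issued tasks
print(wasted / crunched)       # 1/4 = 0.25, my 25% of crunched results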

BTW, it won't bother me if you choose to move this thread to a different group. I put it in Number Crunching because it does pertain to number crunching, but it fits equally well in the Cafe too. If you choose to delete/hide the thread I will respect that decision too. I think that if people want to voice their dislike for this subject or the content of the thread then they should express that concern in a private message to you or in a separate thread, rather than clutter this thread with well-intentioned but off-topic remarks.
8) Message boards : Number crunching : LHC@home gives BOINC a bad name (Message 59324)
Posted 5 Feb 2009 by Dagorath
Post:
You call that logic and deduction? That's just pulling numbers out of a hat until you get the ones that multiply/divide in a way that makes some nonsense appear to be true. Really, Buck, if the scientists wanted 4 results they would just specify a quorum of 4 instead of relying on chance to give an average of 4. If there is some reason they can't have precisely 4 and need the number to vary above and below an average of 4, then do tell us why. Or convince the scientists to drop by and tell us your theory is true.

I'm not going to go through the verbal exercise of explaining how shortening the deadline achieves quorum faster. Everyone else understands and accepts it and for you to challenge that basic concept just proves you're grabbing desperately at straws.

Anyway, if whatever they're crunching is not essential, your words not mine, then there is no need to waste precious resources getting results back quickly. Which means they don't need to go the extra distance of incorporating the proposal to issue resends to fast reliable hosts. In other words, they could have everything they have now simply by shortening the deadline, configuring the server as proposed to spread the work around more, and reducing the initial replication, which ensures that EVERY host is working on a task that is needed instead of tasks that are not needed. Without the issue-resends-to-fast-reliable-hosts policy, it doesn't take a lot of time to configure.
9) Message boards : Number crunching : LHC@home gives BOINC a bad name (Message 59310)
Posted 5 Feb 2009 by Dagorath
Post:
I did not stifle discussion, I simply made a rebuttal to your assertion.


Whatever. I won't nitpick the small stuff because in the end you'll attempt to stifle the discussion again anyway and prove that point for me.

You assert that LHC "wastes" contributions. Fine. That is your opinion. But because you believe it does not make it so ... some people believe that the Earth is flat too ... but their beliefs do not make it so either.


It is not only my opinion, it is fact. Anybody can test my opinion and see that it is fact by looking through the results on LHC@home's website and counting all the results that get crunched after the quorum of 3 has been met. You have done exactly that yourself, and in a thread on LHC you stated in plain English that roughly 25% of the returned results are redundant results that did not need to be crunched.

There are many reasons to contribute to a project, or to not contribute to a project. If you do not agree with a project, then by all means do not contribute to that project. When I had troubles running Rosetta I stopped contributing here ... when they fixed the problems I started contributing again ... but the net gain to the BOINC world if LHC changed their practices would be so insignificant that there would be no way to measure it.


25% of results is 25% of results no matter how much you try to downplay it. Whatever amount of CPU time that translates into is debatable, but the users themselves should decide whether it's insignificant. One thing is for sure ... LHC's batches are growing, and if they continue to grow the waste will become significant in the future, if it isn't already.

And for someone that argues that LHC is your dream project, well, when are you going to contribute?


When they stop wasting 25% of my donation I'll contribute some CPU cycles. For now I will contribute by lobbying them to end the waste and telling them how they can do it. If you don't like my contribution then at least admit that it isn't hurting you and just ignore my posts because a lot of other people need to hear the facts regarding how much of their donation is being wasted.

The damage to the collider did cost money, and it is one of the canards that you continue to tout that because there was money to repair the LHC itself there is money to do something for the LHC@Home project. Again, nothing could be further from the truth, and you have been told this and refuse to accept this fact.


It is not a fact. It is simply an arbitrary budget decision based on the fact that, with the aid of spin doctors and the censoring of contrary opinions and embarrassing facts, they've been able to fool enough suckers into donating precious resources that they in turn throw away. I intend to enlighten the public as to what's really going on at LHC@home. When the public stops contributing CPU cycles, CERN and the LHC will scoop money out of whatever slush fund they find convenient and they'll rub that money on the problem and make it go away. If they don't dig up the money (and it's only a few thousand dollars according to the LHC admins) they'll have to mothball the collider. They won't mothball it, they'll fork over the money or else provide manpower in lieu of money. Simple as that.

The money to build, operate, and maintain the LHC does not support the LHC@Home project, which gets exactly zero funding from the LHC project ... it is a purely back-door project that does do useful work, but is neither indispensable to the LHC project itself nor integral to the success of LHC ...


Sounds like you're saying none of the work LHC@home does is necessary to the collider, just "useful". But to whom is it useful? What purpose does it serve? Is it there just so some IT guy can state on his resume that he ran a BOINC project? Then why not just shut it down completely?

Actually I have read *ALL* of your proposals. And, in the LHC@Home forums discussed them and pointed out the logic flaws in them and the points you keep ignoring. Facts are inconvenient things ... you can pretend they don't exist, but that does not make them go away.


You're absolutely right, so why don't you start telling us WHY my proposal won't work instead of just telling us it won't?

Changing the issue and quorum may do what you say. It may not. We don't know. What we do know, and what *I* know from personal experience (because I was there), is that we tried quite a few different mechanisms to obtain the results that the project desired.


But they have never tried my proposals and I know because I've been there for the past 2 years and the server software required to implement my proposals didn't get installed until very recently. Ergo they could not have tried my proposals.

So, yes the new server software has new features...

The project LHC@Home has no staff ... no money ...

To change the server software would take time and effort ... then that install has to be tested and proved... then the new plan has to be tried to see if it works ... which it may do what you say it will ... then again ... it may not ... which is one of those inconvenient facts that you love to ignore.


I've never ignored that, and you saying I have is just further proof that you have never bothered to read what I've said. I have stated, on more than 1 occasion and in direct response to your posts, that my proposals need to be tested, debugged and tweaked. I have also stated that there is no better time to do that than now, when the collider itself is offline for repairs.

As far as the "no money" spin, that's been debunked already, several times.


If it does not work that investment of time and effort is all pure waste.


It will work and we know it will work because so many other projects have made it work and simple logic and common sense tells us it will work.

The adage "If it is not broken, don't fix it" comes to mind.


But it is broke and it does need fixing :)

The only person that seems to think that there is a world ending crisis is you sir. Face it, if they changed the issue and quorum at LHC the world's energy crisis would neither end tomorrow nor would the price of electricity drop ...


That is a very obvious and very lame distortion of what I've said. But that's the only recourse you have ... to put words in my mouth. I have said it would conserve electricity and put more CPU cycles to good use rather than wasting them on tasks that don't need to be crunched. You seem to have something against conservation and efficiency, but I don't know why.

Another issue is that changing the deadlines to a shorter span means that more people that want to contribute to LHC@Home would be forced out of the project because they could not meet the shorter deadlines. Personally it is not an issue for me, as my machines are fast enough that it would not matter. But it would matter to many ...


Well, what is LHC@home's purpose anyway? Are they there to do some science efficiently or to make work for computers? Do they owe work to crunchers? If a cruncher can't work with a shorter deadline then he can easily find a project with deadlines he CAN work with.

Then you argue that the current policy to spread the work around is too restrictive but that you have a better way ... well ... so what ... the whole point of your policy change is not to save energy or anything else ... it is to make it so those that have faster systems can, ahem, "hog" the available work ...


Nope, you really don't understand what I've been saying. Or else you're just spinning the facts again on purpose. By restricting hosts to 2 tasks in progress per core and configuring a 15-minute callback deferral, anybody and everybody will get as much work as they can handle, and nobody will be able to cache a huge number of tasks that they won't be able to crunch for several days. More hosts will be applied to the job and the results will be returned faster. With the current restrictions, a host that can crunch 100 tasks in a day isn't allowed to, which retards completion of the batch.
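
As a rough illustration (my own numbers-in, numbers-out sketch, using the figures from this post; the quad-core host is hypothetical), compare the daily throughput of a fast host under the two policies:

can_crunch_per_day = 100    # what the fast host in the example could complete
per_day_cap = 16            # current LHC@home tasks-per-day limit

cores = 4                   # hypothetical quad-core host
in_progress_limit = 2 * cores     # proposed: 2 tasks in progress per core
fetches_per_day = 24 * 60 // 15   # at most one request per 15-minute deferral

print(min(can_crunch_per_day, per_day_cap))  # current policy: 16 tasks/day
print(min(can_crunch_per_day, in_progress_limit * fetches_per_day))  # proposed: 100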

Actually, to quote my analysis, there is one task per result set on average that is, as you assert, "wasted" ... one per set of five is 20%, not 25% ...


That's also 1 task in 4 returned which is 25% of returned tasks. Anyway, 20% is still far too much.

And the "old" data was from December 2008 ...

And that analysis had nothing to do with the "failure" of tasks, it was how many tasks on average were made part of the quorum. And, that analysis of several hundred tasks showed that the average quorum was 4 ...

Which is another one of those inconvenient facts.


That much is fact and I won't deny it, but your next paragraph below is just nonsense you are attempting to spin into fact. It doesn't fly.

The definition of the quorum is a lower bound. That it is stated as a "minimum quorum of 3" does not preclude, as you assert it does, that a higher quorum is desirable or actually the target. Since we do not have definitive word from the scientists, we can only speculate. But I hardly see them as chortling over wasting computing resources needlessly.


Well if they want a quorum of 4 then why don't they just set a quorum of 4? Hmmm? Why do they rely on chance to give them an average of 4? Again your "argument" crumbles. No, Mr. Buck, they need only 3 but they replicate 5 because that was the only way to get the results back quickly years ago and they see no need to change that wasteful policy as long as suckers like you keep ignoring the waste.


Again, we don't know everything that is part of the behind-the-scenes science or the imperatives that drive it. I, for one, may ask questions about the logic but do not then assert that there must be something nefarious to the replies if they do not suit my desires.

As far as deleting the information ... well, no, not really ... if you are interested you can find it if you like ... yes, the thread was hidden ... just like the ones where you were abusive to the project administrator ... yes, I found those as well ...

And here we have the key ... if you don't get agreement, then the person that does not agree with you is lying and insulting and spinning ... but you never see the alternative, which is that you might be wrong ... and yes, I used the word rant, because you present an extreme opinion of another project and claim the mantle of knowing how to solve a non-issue and then, well, I will let others decide ...


It's obvious you cannot separate the issues in your mind and know what is provable through simple logic and deduction and what is just conjecture. Therefore you throw everything out and stick with the status quo because it's safe. That's a classic symptom of someone suffering from depression, a disease you admit you have.

And the only reason I made the first, and now the second, rebuttal is that you present your opinion as fact, and the truth is that it is not as simple as you say it is.


Some parts are EXACTLY as factual as I say they are. And many other experienced and knowledgeable people agree. Other parts are not so simple. But if people learn what's happening and what the alternatives are, they might just say "enough is enough" and force LHC to do the right thing and eliminate the waste. And if they don't, then oh well, at least I tried to do what is right.
10) Message boards : Number crunching : LHC@home gives BOINC a bad name (Message 59292)
Posted 4 Feb 2009 by Dagorath
Post:
If reading the truth bothers you, Mr. Buck, then by all means don't read it. But there is no need to stifle discussion here or in any other forum. The discussion is important to Rosetta and all BOINC projects because LHC@home needlessly wastes CPU time that would otherwise go to other projects.

LHC found $14 million to repair recent damage to the collider. There is money to spare. The only reason they don't repair their broken BOINC project is because users keep donating the CPU time for free. The reason they keep donating time for free is because they don't realize how much of their donation is wasted. They don't realize how much of their donation is wasted because each and every time the matter comes up for discussion, LHC@home project admins delete all the embarrassing info.

Paul D. Buck has never even read the proposals I have made because he thinks there is just one way... the old way. He is totally unaware of any of the new server side features and functions that exist today and are installed on LHC@home's servers. It's not rocket science. It's simple common sense...

1) shorten the deadline to 3 or 4 days (current deadline is 7)
2) restrict hosts to 2 tasks in progress per core to spread the work around to more hosts without restricting hosts that are willing to crunch many tasks per day (the current policy restricts hosts to 16 tasks per day, which is unnecessarily restrictive)
3) set initial replication to 3 and issue resends to fast reliable hosts with a deadline of 1 day
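
To show the kind of comparison involved, here is a toy Monte Carlo sketch (entirely my own model, not LHC@home's scheduler; the 10% miss rate and the assumption that results arrive exactly at the deadline are both illustrative simplifications) of average time-to-quorum under the current policy versus the proposal:

import random

def time_to_quorum(issue, quorum, deadline, resend_deadline, miss_rate=0.1):
    # Initial batch: each task either returns by the deadline or misses.
    returned = sum(random.random() > miss_rate for _ in range(issue))
    elapsed = deadline
    while returned < quorum:                 # reissue misses one at a time
        elapsed += resend_deadline
        returned += random.random() > miss_rate
    return elapsed

def average_days(policy, trials=10000):
    return sum(time_to_quorum(*policy) for _ in range(trials)) / trials

print(average_days((5, 3, 7, 7)))  # current: IR 5, Q 3, 7-day deadlines
print(average_days((3, 3, 3, 1)))  # proposed: IR 3, Q 3, 3-day deadline, 1-day resends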

Paul D. Buck bases his assertion on old data which showed a task failure rate of about 30%. The current rate is less than 3%, but he refuses to acknowledge that fact even though it comes directly from the project's lead developer and can be confirmed simply by looking at a decent number of returned results in the web logs. That's the task failure rate, not the rate of waste.

Mr. Buck himself confirmed my own estimate of 25% waste which happens to agree with estimates by other project devs. Now he chooses to deny his own words. Fortunately for P.D. Buck, the LHC admins conveniently deleted all that embarassing info.

I started discussing the matter over a year ago in polite tones and was met with nothing but lies, insults, spin doctoring, ridicule and derision for even daring to question the scientists, as if they are infallible. Take, for example, Buck's use of the term "rant" in his last post here. The issue does affect Rosetta and needs to be discussed here because it's costing Rosetta too. And to apply Buck's own brand of logic to the situation ... Buck, if you don't like my posts then don't read them. (You'll notice how Buck will get all PO'd when you expect him to actually apply his own logic, like he's above everything and everyone.)
11) Message boards : Number crunching : LHC@home gives BOINC a bad name (Message 59289)
Posted 4 Feb 2009 by Dagorath
Post:
It's a very sad day for the BOINC community. On the one hand we have projects like Rosetta@home, Einstein@home and ABC@home that are developed and run by skilled, community minded people who have a conscience. They make the effort to keep their science app, work units and policies efficient, thereby reducing wasted effort.

On the other hand we have bad apples like LHC@home that make the whole barrel stink and rot. The LHC@home project knowingly and purposefully tosses 25% of everyone's contribution of CPU time, hardware and electricity into the waste basket. They don't receive any benefit from that 25% waste that they cannot get from a 0 waste policy and practice. The 25% would be acceptable if it could not be avoided but it can be avoided. All the tools and technology exist to eliminate the waste but the project admins refuse to even admit the problem exists. What's even sadder is that they now censor posts in the LHC@home forums that even mention the issue and they promote lies that cover up the waste.

The 25% waste is a direct result of LHC@home's policy of issuing 5 initial replications for a quorum of 3, the 5/3 policy for short. The 5/3 policy was needed and justifiable years ago when BOINC server and client were both rather primitive. Today, however, new features and functionality in BOINC client and server allow a 3/3 policy (initial replication of 3 for a quorum of 3) that would give LHC@home everything they have now plus other benefits they don't have but need.

If you have computers attached to LHC@home, detach them and donate your precious CPU time, hardware and electricity instead to a project run by competent people who need your donation and do what they can to conserve the resources that ALL the BOINC projects need.
12) Message boards : Number crunching : A SINGLE Graphics Card = 1 teraFLOPS (Message 59264)
Posted 3 Feb 2009 by Dagorath
Post:
Continuing the racing car vs. dump truck analogy...

Maybe the job involves both hauling heavy loads as well as getting small loads from point A to point B quickly?

Maybe the work can be split into 2 parts: 1 part that is suited to the GPU and 1 part that is suited to the CPU. Folding@home was doing something similar when they first started using GPUs, and I think maybe they still are.
13) Message boards : Number crunching : Any plans for a rosetta cuda client (Message 59188)
Posted 30 Jan 2009 by Dagorath
Post:
Chilean wrote:
And what makes it impossible for it to "land" or choose the exact number?


In a nutshell, there aren't enough bits in a computer to represent the fractional portion of every imaginable number. Same applies to the whole number portion.

Remember that in spite of the fact that computers work very fast, they have limitations that we humans do not have. When we want to write the number pi, for example, to 100 decimal places, we just go ahead and do it because we have the option to use as much space on our paper as we need. If we run out of room on one sheet we just continue on another sheet and staple the 2 together. It isn't that way with computers; they don't have as much "space" as they want on their "paper". Their "paper" is their physical RAM and the registers inside their CPU. RAM and registers consist of a finite number of memory locations. So, when you design a computer, you are forced by finite RAM and finite registers to limit the fractional portion of numbers to a finite number of bits. By limiting the fractional portion to, for example, 64 bits, you lose the ability to represent all the fractional parts that require 65 or more bits.

OK, but you know that computers have calculated the value of pi to more than 100 decimal places. Well, it turns out that in spite of the finite number of bits in the hardware, programmers can do a few tricks in the software to sort of (but not really) give computers more bits than they actually have. Unfortunately, using those tricks slows down the computation speed and the more "pretend" bits you give them the slower things go.
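
You can see both points for yourself in a couple of lines (a minimal Python sketch of my own; any language using standard 64-bit floats behaves the same way):

# 0.1 has no exact binary representation, so the machine stores the
# nearest value its finite fraction bits can land on:
print(0.1 + 0.2)           # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)    # False

# One of the software "tricks": Python's decimal module gives you as
# many digits as you ask for, at the cost of speed.
from decimal import Decimal, getcontext
getcontext().prec = 50
print(Decimal(1) / Decimal(7))   # 50 significant digits of 1/7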

That's a very quick and simple answer to your question, as simple as I can get without going into long and complicated examples involving binary arithmetic.
14) Message boards : Number crunching : Running in "high priority" with no real reason (Message 59167)
Posted 29 Jan 2009 by Dagorath
Post:
Version 6.4.5's buggy scheduler is likely causing it. Unless you need 6.4.5 to run CUDA apps, consider dropping back to 6.2.19. On the other hand, if your tasks are getting returned on time and there are no other problems, then you might consider the occasional switch to high-priority mode a minor annoyance that doesn't really warrant going back to 6.2.19.
15) Message boards : Number crunching : Having problems with client and compute errors. (Message 59066)
Posted 27 Jan 2009 by Dagorath
Post:
One clue is here:


Unhandled Exception Detected...

- Unhandled Exception Record -
Reason: Access Violation (0xc0000005) at address 0x7730B19B read attempt to address 0x8C5799DC


There was an Access Violation, which means the application tried to read an address that was not part of its assigned address space. Programs are not allowed to read addresses outside their assigned memory space.

Access Violations are often caused by pointers gone awry. Faulty memory, OC'd or faulty CPU or programming errors can cause pointers to go awry and point to an address outside of the program's assigned address space. The error report itself doesn't give enough info to favor any one of those possible causes over the others. Experience with OC, however, tells us that if you're OCing and getting Access Violations then the first thing you should do is drop back to stock settings.
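
For the curious, here is what a pointer gone awry looks like from the program's side (a deliberately faulting sketch of my own, reusing the address from the report above; don't run it anywhere you care about):

import ctypes

# Read from an address that is almost certainly not mapped into this
# process. This triggers an access violation (0xc0000005 on Windows,
# SIGSEGV on Unix), just like the one in the report above.
ctypes.string_at(0x8C5799DC)   # read attempt -> access violation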

16) Message boards : Number crunching : Having problems with client and compute errors. (Message 59064)
Posted 27 Jan 2009 by Dagorath
Post:
When you change 2 things at the same time (for example the OC and the RAID) you make it difficult to discover the cause of the problem via process of elimination. Leave the RAID as it is and don't change ANYTHING else but the OC. Put the clocks and voltages back to stock. Ignore compute errors on any tasks that started crunching when it was OC'd, for they may have been tainted by the OC before they errored. Let it run on stock settings for several days, not just several hours.

If you still get compute errors on BOINC tasks, then leave it at stock speeds and tweak other things one at a time until it runs right. It could be the RAID, BOINC settings, a dirty power supply or 100 other things, but don't make the mistake of thinking that if it still gives errors at stock speeds then the problem can't be the OC. When you eventually get it running right at stock speed for several weeks, then try a conservative OC for at least a week. If that works then try a little more OC.

Prime95 is only 1 test, and lots of people think it's not as rigorous a test as many BOINC projects are. Passing Prime95 doesn't mean the machine will pass other tests.

17) Message boards : Number crunching : SETI infected by Rosetta? (Message 59003)
Posted 23 Jan 2009 by Dagorath
Post:
Mikey, thanks for volunteering to be the crashtest dummy :)

It sounds like Dotsch/UX created a RAM disk, then BOINC attempted to download the Rosetta files to it, but the disk was too small.

I haven't tried Dotsch/UX yet, but it sounds to me like you need a 1 or 2 GB USB memory stick (a thumb drive) plugged into a USB port if you want to configure Dotsch/UX to be "diskless". I gather there are install options that will install the OS and BOINC to the thumb drive so that you can boot from the thumb drive rather than the CD. After boot, the thumb drive is used to store BOINC's data directory. At least that's what I gather from the info at the Dotsch/UX website, but again ... I've never actually tried it.

P.S. As for the project list being empty, it sounds like Dotsch forgot to include an all_projects.xml file. That's the file that stores the names and URLs of the projects in the project list. The BOINC client updates all_projects.xml about once a week, so eventually you would get one.
18) Message boards : Number crunching : SETI infected by Rosetta? (Message 58983)
Posted 22 Jan 2009 by Dagorath
Post:
Just from instinct, the hard disk on this sys would be suspect #1 - if so, maybe a CDROM based linux would be a cool way to go.

Generally, do those install most of what they need to run in memory, or do they have to go to the CD drive a lot? (the CD drive on this thing is as old as the rest of it, so it wouldn't be good to beat on it)


The trouble is BOINC and the project applications running under BOINC need to save data to the disk. If your hard disk is indeed toast and you remove it then all you have left is the CD ROM. Since it's ROM, BOINC can't save data to it. But no sense worrying about that until you run chkdsk and/or other diagnostics on your HD and see if it's toast.

If your HD is bad then watch your local newspapers for people giving away an old computer for free. Or ask friends. You'll be amazed at how many people have old computers sitting around just waiting to get tossed out or recycled. Scavenge an HD from one of those, test it, install Linux on it if it's good.

Another alternative is to install Linux on a USB memory stick. Dotsch recently released Dotsch/UX, a Linux + BOINC combo that installs everything you would need, including BOINC. The only trouble I can foresee is that USB memory sticks might need USB 2.0 but the computer might have only USB 1.1, which may not work. Or it might work, just not as fast as it would with USB 2.0. For more information have a look at http://www.dotsch.de/Dotsch_UX.

19) Message boards : Number crunching : Boinc Manager 6.4.5 problem (Message 58981)
Posted 22 Jan 2009 by Dagorath
Post:
All told, great results thanks to the support I've had here. Every suggestion has helped in practical terms or in my understanding. Excellent.


You're welcome. Your clear and concise writing style makes it easy to help you :)

I don't know if you've found your way to any of the BOINC documentation yet, so I'll leave you with a few links; see my sig below. The BOINC FAQ is an excellent place to start looking for solutions to problems you might run into from time to time. It also lists most of the error messages you can encounter along the way, explains what they mean, and tells you how to fix the problems they point to. The wiki does a pretty good job of explaining how BOINC works, from the basics to more advanced topics, how to install and configure it, etc. Both resources are worth bookmarking in your browser.
20) Message boards : Number crunching : Boinc Manager 6.4.5 problem (Message 58972)
Posted 21 Jan 2009 by Dagorath
Post:
I run Vista64 (not my best choice overall) but that allowed me to have 8 GB RAM (obviously good), so lack of RAM isn't a consideration, I'd hope.


Let's do the math. You're running only Rosetta at the moment. 60% of 8 GB gives Rosetta 4.8 GB. If you're running a quad-core with hyper-threading then you can have up to 8 Rosetta apps in memory at once. 4.8 GB divided by 8 allows 600 MB per task. I don't know exactly how much RAM Rosetta tasks need, but I doubt it's more than 100 MB.

Just for comparison, tasks from the Superlink project need up to 1 GB apiece; 8 of those would eat all your RAM. The point is, when you add projects in the future, inquire at the project about RAM requirements or watch the task in Task Manager to see what it needs. Sometimes tasks need only 100 MB for 90% of the run time but spike to 1 GB or more for brief periods.
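
The same budgeting works for any project mix; here is the calculation above as a small reusable sketch (my own helper, where the 60% share is BOINC's memory-usage preference):

# Per-task RAM budget: total RAM times BOINC's allowed share, divided
# by the number of tasks that can run at once.
def ram_per_task_mb(total_gb, boinc_share=0.60, concurrent_tasks=8):
    return total_gb * 1024 * boinc_share / concurrent_tasks

print(ram_per_task_mb(8))   # 8 GB, 8 threads -> about 614 MB per task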




