| Author | Message |
Sid Celery
Joined: 11 Feb 08 Posts: 2538 Credit: 47,093,569 RAC: 11,751
|
It's quite amazing to me that the more detail I go into the more doesn't get read.
In the section I've highlighted I'm pointing to the fact that batches of tasks are very limited in number and don't last very long.
Very quickly there aren't any more 4hr tasks for anyone to move onto.
It'd save everyone a great deal of time simply to take the point being made rather than contriving some other bizarre eventuality in which it might not apply, as if that makes any difference to anything.
It doesn't.
And in the section before that you’re doing maths that does not make sense.
Regardless of how many 4 hour tasks there are, for the period in which they are running they are as efficient as the 8 hour tasks - they are computing at the same flops and, as soon as the last decoy has completed, the task releases its resources. Your maths assumes that the resources are held for the full 4 hours, which is wrong.
For the period in which they are running - yes. Which is half the time they're scheduled to run. You're making my point for me. Thanks.
My maths doesn't assume resources are held for any amount of time. The shortfall in Rosetta processing is continually released back.
And half (and more) of the processing potential of each Rosetta batch is lost until the batch is used up. And then we have no Rosetta tasks to process for longer.
Perhaps someone who wants to process Rosetta tasks can tell me the benefit of that.
Over 24hrs, 4hr tasks would complete 12 decoys, 8hr tasks 15 decoys and 12hr tasks 16 decoys.
You are stating that 4 hour tasks complete 2 decoys, then state that over 24 hours they would complete 12. This is 6 tasks in 24 hours equals 4 hours per task but when the task detects that it cannot complete a third task after ?3? hours it releases the resources and therefore you will complete 24/3 tasks not 24/4 which gives you 16 completed decoys per 24 hours.
This really is an extraordinary waste of time and effort to make a point that doesn't end up going anywhere, but if you insist on going there, let's go there
A 4hr task that can't complete 3 decoys will be taking 2.67hrs or more to complete 2 decoys.
Let's call it between 2.7hrs and 4hrs, failing to utilise between 0 and 1.3hrs of originally planned runtime.
Let's call that an average of 0.65hrs per task for argument's sake.
That 0.65hr average applies to 4hr, 8hr and 12hr tasks equally.
So, over 24hrs:
you'll get 24/3.35 = 7.16 nominal 4hr tasks at 2 decoys each = 14.33
you'll get 24/7.35 = 3.26 nominal 8hr tasks at 5 decoys each = 16.33
you'll get 24/11.35 = 2.11 nominal 12hr tasks at 8 decoys each = 16.92
And a batch of a million tasks will last 3.35m hrs, 7.35m hrs and 11.35m hrs respectively
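If it helps, here's the same arithmetic as a quick Python sketch (purely illustrative - the 2/5/8 decoys per task and the 0.65hr average unused runtime are just the assumptions above):

# Sketch of the estimate above: decoys completed per 24hrs for each
# nominal target runtime, assuming an average of 0.65hrs of each target
# goes unused and the decoys-per-task counts stated above.
targets = {4: 2, 8: 5, 12: 8}   # nominal target hrs -> assumed decoys per task
unused_hrs = 0.65               # assumed average unused hrs per task

for target_hrs, decoys_per_task in targets.items():
    effective_hrs = target_hrs - unused_hrs        # e.g. 4 -> 3.35
    tasks_per_day = 24 / effective_hrs             # e.g. 7.16
    decoys_per_day = tasks_per_day * decoys_per_task
    print(f"{target_hrs}hr tasks: {tasks_per_day:.2f} tasks/day, {decoys_per_day:.2f} decoys/day")
# prints roughly 14.33, 16.33 and 16.92 decoys/day respectively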
Does it make a difference? Yes. 14.33, 16.33 & 16.92 are not the same as my original estimate of 12, 15 & 16
Does it make a difference that negates the point I'm making? No
Could you modify the assumptions I used to reverse the point I'm making? Also no
I mean, maybe this has been a valuable exercise to make me even more than 100% certain my suggestion for a 20 second one-off adjustment to people's settings is something everyone should do.
It's certainly achieved that from my pov.
But I'm also even more certain some people still won't make the change, because of how much delight people in general take in not doing beneficial things, even when there's literally no downside.
I have a phrase for this sort of thing, but it's wise if I don't write it down as people tend to take rather a lot of offence, however unequivocally true it might be.
|
|
Grant (SSSF)
Joined: 28 Mar 20 Posts: 1925 Credit: 18,534,891 RAC: 0
|
But I'm also even more certain some people still won't make the change, because of how much delight people in general take in not doing beneficial things, even when there's literally no downside.
And I'll point it out again - that is only the case if Rosetta is your primary project, where you are trying to keep your system doing Rosetta work as much as possible, even when there is little if any to be had. And any other project is just a backup, so the system is doing something even if there is no Rosetta work.
For everyone else, the default Runtimes are best - the project gets as much work done per Task as it needs, and people's systems are able to do work for their other projects sooner.
If they get Rosetta work, they get some. If they don't, then they don't.
Grant
Darwin NT
|
|
Bryn Mawr
Joined: 26 Dec 18 Posts: 440 Credit: 15,189,162 RAC: 4,691
|
I’ve highlighted the error in your logic, it’s just plain wrong.
If we work with your 0.65 hours the task “fails to utilise” (because it’s utilised by the next task) then each task is taking 1.675 hours on your machine. Therefore an 8 hour task will complete 4 tasks with a “wastage” of 1.3 hours and a 12 hour task will complete 7 tasks with a “wastage” of 0.275 hours.
Thus the figures are actually :-
over 24hrs:
you'll get 24/3.35 = 7.16 nominal 4hr tasks at 2 decoys each = 14.33
you'll get 24/6.7 = 3.58 nominal 8hr tasks at 4 decoys each = 14.33
you'll get 24/11.725 = 2.05 nominal 12hr tasks at 8 decoys each = 14.33
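Or the same model as a quick Python sketch (an illustration only - it assumes a flat 1.675 hours per decoy and that a task only starts a decoy it can finish within the target):

# Sketch: fixed time per decoy; a task completes as many whole decoys as
# fit in the target runtime, then releases its resources immediately.
HOURS_PER_DECOY = 1.675   # assumed, from the 3.35hrs-per-2-decoys figure above

for target_hrs in (4, 8, 12):
    decoys_per_task = int(target_hrs // HOURS_PER_DECOY)   # whole decoys that fit
    actual_hrs = decoys_per_task * HOURS_PER_DECOY         # 3.35, 6.7, 11.725
    decoys_per_day = 24 / actual_hrs * decoys_per_task
    print(f"{target_hrs}hr target: {decoys_per_task} decoys in {actual_hrs}hrs -> {decoys_per_day:.2f} decoys/day")
# every target comes out at about 14.33 decoys/day under this model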
|
|
Sid Celery
Joined: 11 Feb 08 Posts: 2538 Credit: 47,093,569 RAC: 11,751
|
But I'm also even more certain some people still won't make the change, because of how much delight people in general take in not doing beneficial things, even when there's literally no downside. And I'll point it out again - that is only the case if Rosetta is your primary project, where you are trying to keep your system doing Rosetta work as much as possible, even when there is little if any to be had. And any other project is just a backup so the system is doing something even if there is no Rosetta work.
For everyone else, the default Runtimes are best- the project gets as much work done per Task as it needs, and people's systems are able to do work for their other projects sooner.
If they get Rosetta work, they get some. If they don't, then they don't.
tl;dr1 lol
tl;dr2 You already said this and I got it the first time
tl;dr3 "pardon me for assuming people in the Rosetta forums expect to run Rosetta"
I haven't written this in any other forums, where your point might be more reasonable and I would expect to be told to get lost. I've written it here.
I'm not suggesting people should set their tasks to have 12hr runtimes, like I do, because as I've shown, and I was already aware, the credit-benefit/hr is marginal even if the overall runtime (& credit) benefit is significant.
Irrespective of whether Rosetta is someone's primary project or not, Boinc scheduling gets screwed up by the 4hr Rosetta Beta runtime not matching Boinc's forced 8hr assumption.
And correcting that is a benefit to the scheduling of all the projects anyone runs, isn't it. You don't mention that, as if it isn't one, but it is, isn't it.
And, as we're all only too aware, Rosetta has long periods of almost no work. Out of the last 60 days I note I've only received credit on 15 days - 25% of the time. 75% of the time, nothing. At other times this year, much less.
So it's not 'only applicable if Rosetta is your primary project'. It would be more realistic to say people should only retain the 4hr Rosetta Beta runtime if Rosetta is a 0 Resource Share project for them and they only want to run it minimally (if at all).
I can't believe how much I've ended up writing to explain something that was intuitively obvious from the outset, but I don't regret doing so in response to claims that are patently untrue, as they are.
|
|
Sid Celery
Joined: 11 Feb 08 Posts: 2538 Credit: 47,093,569 RAC: 11,751
|
I’ve highlighted the error in your logic, it’s just plain wrong.
If we work with your 0.65 hours the task “fails to utilise” (because it’s utilised by the next task) then each task is taking 1.675 hours on your machine. Therefore an 8 hour task will complete 4 tasks with a “wastage” of 1.3 hours and a 12 hour task will complete 7 tasks with a “wastage” of 0.275 hours.
Thus the figures are actually :-
over 24hrs:
you'll get 24/3.35 = 7.16 nominal 4hr tasks at 2 decoys each = 14.33
you'll get 24/6.7 = 3.58 nominal 8hr tasks at 4 decoys each = 14.33
you'll get 24/11.725 = 2.05 nominal 12hr tasks at 8 decoys each = 14.33
tl;dr lol
It would be nice if that were true - partly because, with the 12hr tasks, 2.05 x 8 is 16.4 decoys/24hrs
I've just looked back at a representative number of my completed nominal 12hr tasks
As it turns out, I lose on average 0.55hrs per task - average 11.45hrs, which is less than your 11.725hrs estimate but not far off
On the plus side, I complete 7, 8, 9, 10 & 12 decoys, averaging 8.47, which is more than our estimate of 8
The product of which is 24/11.45 = 2.096 * 8.47 = 17.75/24hrs - more than your 16.4 or my 16.92 - I'm pleasantly surprised to learn
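For anyone who wants to check that arithmetic, a quick sketch using the sample figures above (they come from my own 12hr tasks, so they won't generalise exactly):

# Sketch: observed sample averages for nominal 12hr tasks
avg_task_hrs = 11.45    # observed average runtime per task
avg_decoys = 8.47       # observed average decoys per task

tasks_per_day = 24 / avg_task_hrs           # ~2.096
decoys_per_day = tasks_per_day * avg_decoys
print(f"{decoys_per_day:.2f} decoys/day")   # ~17.75, vs the 16.4 and 16.92 estimates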
I'm not going to go back to it, but when I looked at someone's nominal 4hr tasks they all seemed to only complete 2 decoys - much less scope for variability within that short target runtime, I'm guessing
Unfortunately, I don't have an example of someone using nominal 8hr runtime tasks to quantify, but I really don't think it's reasonable to limit it to 4 decoys. More scope for variability of runtimes over 8hrs than 4hrs, though not as much as with 12hrs. If someone has an example of an 8hr runtime host by all means point to it.
The point being, averages are averages. They're not hard and fast every time. Even if we only used a figure of 4.2 decoys/8hr task we'd be looking at over 15 decoys/24hrs
I'm not interested in talking about (or even thinking about) your version of the sums or mine tbh - intuitively, longer runtimes will result in greater productivity per unit time and vastly more in absolute terms, so it's a benefit whichever way anyone wants to look at it. The exceptions - in deference to Grant's comical postings - are if you don't want to run Rosetta much or at all, if you want our small batches of tasks to run out quickly, or if you'd like Boinc's scheduling of all projects to get messed up for a period of time. None of which is helpful to anyone in any way.
My only disappointment in this discussion is I started off being 100% sure this tiny tweak would be beneficial to everyone in every way, and I can still only be 100% sure now. Somehow I feel cheated.
|
|
Bryn Mawr
Joined: 26 Dec 18 Posts: 440 Credit: 15,189,162 RAC: 4,691
|
With apologies for the typo, the 12 hour deadline only allows 7 decoys, not 8 :-
At 1.675 hours per decoy you have :-
1=1.675
2=3.35
3=5.025
4=6.7
5=8.375
6=10.05
7=11.725
8=13.4
It is actually very easy to prove that the work done must be the same regardless of the time limit - changing the limit does not change the machine’s performance, no change to the CPU’s clock speed, the memory’s speed or bandwidth, the bus speed or any other metric affecting performance.
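A quick sketch to illustrate (assuming, as above, a flat time per decoy, that a task only starts a decoy it can finish, and that no time is lost between tasks):

# Sketch: with a fixed time per decoy and no gap between tasks,
# decoys per day comes out the same whatever the target runtime.
HOURS_PER_DECOY = 1.675   # illustrative figure from above

for target_hrs in (1, 4, 8, 12, 24, 36):
    decoys_per_task = int(target_hrs // HOURS_PER_DECOY) or 1   # assume at least one decoy always runs
    actual_hrs = decoys_per_task * HOURS_PER_DECOY
    decoys_per_day = 24 / actual_hrs * decoys_per_task
    print(f"{target_hrs:>2}hr target: {decoys_per_day:.2f} decoys/day")
# every line prints ~14.33 - changing the target does not change the throughput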
I seriously do not care whether people run 1 hour tasks or 36 hour tasks, that is their decision and you can suggest they change it to your heart's content, but you are trying to use erroneous maths to prove something that is not true, and that I cannot accept.
It is perfectly acceptable to argue that one 12 hour task is quicker than three 4 hour tasks - by twice the time it takes to end one task and start the next, which is a matter of seconds. It is also perfectly valid to argue that in an environment where tasks are intermittent, it is better to run longer tasks to use the empty time and to provide more results to the researchers, but it is totally false to argue that more work is done in unit time by changing the time limit per task.
As for the exact figure you achieve with your specific PC, I was using your figure in the first place - the exact figure used makes no difference, the maths is the same. A quicker PC will complete more decoys, but this will affect the amount of time given to the next task and the results will balance out - intuition has nothing to do with it, the maths is the only determinant.
BTW, attacking the poster is not helping your argument - calling Grant’s posting comical is just weakening any valid comments you make.
|
|
Sid Celery
Joined: 11 Feb 08 Posts: 2538 Credit: 47,093,569 RAC: 11,751
|
I picked up that your error was just a typo, but I picked on the wrong typo. Not that it matters.
I think you've got bogged down in the numbers, not the reality of how tasks work.
The average loss comes from the fact that 4hr tasks invariably complete 2 decoys within a 4hr target runtime, not 3 or 1, meaning they take 2.67 to 4hrs for the two (which I approximated to 2.7hrs). That's an implied runtime, not a real one.
But averaging 2 decoys isn't very representative of real runtime. Their length can vary a lot, as indicated by my sample of 12hr tasks where I found 7, 8, 9, 10 & 12 decoys completed. I actually found one that managed 24 decoys, but discarded it as an unrepresentative outlier. So, extrapolating a theoretical runtime based on a very short 4hr target doesn't work very well - in fact it works very badly.
The much better way to look at it is to see what happens when you get to the last decoy (of the 4hr or 8hr or 12hr target) and whether it can complete or not.
That's why I said the unutilised part of the target runtime will be the same whatever the overall length - because tasks are unlikely to approach the end from the same point.
Averages are only averages.
While 100% of 4hr target runtimes completed 2 decoys, from memory I think 16% of 12hr target runtimes completed 7, 47% completed 8, 26% completed 9, 5% completed 10 and 5% completed 12
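As a rough cross-check on those from-memory percentages (they only sum to 99%, so treat this as indicative only):

# Sketch: weighted average of decoys per 12hr task from the rough
# distribution above (percentages recalled from memory).
distribution = {7: 16, 8: 47, 9: 26, 10: 5, 12: 5}   # decoys completed -> % of tasks

total_pct = sum(distribution.values())                # 99
mean_decoys = sum(d * pct for d, pct in distribution.items()) / total_pct
print(f"~{mean_decoys:.2f} decoys per 12hr task")     # ~8.4, close to the 8.47 sample average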
That's a very significant difference from the estimates both of us put out, but that's the reality, and it's way more than a rounding error of difference.
While writing this, whether or not you believe people get the opportunity to complete more than 2 or 3 times the decoys in 8 or 12hr runtimes compared to 4hr ones, I can't help thinking I've got bogged down in these numbers too.
By far the more significant difference is the doubling/trebling of absolute runtimes when we have so much downtime in the year and most people have probably run out already...
...I say most people. I haven't. I'll probably still be running into the New Year, because... well... see above
And just to mention, I've made no disparaging remarks about Grant himself, who is usually great here, as are you - my remarks were specifically about his postings, as I made a point of detailing, because they're ludicrous.
|
|
Bryn Mawr
Joined: 26 Dec 18 Posts: 440 Credit: 15,189,162 RAC: 4,691
|
I must admit that I cannot follow any of your logic in this post, especially how you conclude that the unused portion of the target runtime is the same regardless of the overall length but, by all means, let us take numbers out of the discussion and examine how the task behaves when you get to the last decoy.
There are two possible ways the application could work: either it predicts at the start of each decoy whether it has time to complete and, if not, terminates, or it waits until the time limit is reached and then terminates, discarding the uncompleted decoy.
It is evident from the varying runtimes of the tasks that the former is the case.
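In code, that first behaviour would look something like this sketch (a hypothesis only - run_decoy and estimate_decoy_hours are made-up placeholders, not Rosetta's actual code):

# Sketch of the hypothesised loop: before starting each decoy the task
# checks whether it can finish within the target runtime and, if not,
# wraps up and frees the core for the next task.
def run_task(target_hrs, estimate_decoy_hours, run_decoy):
    elapsed = 0.0
    completed = 0
    while True:
        expected = estimate_decoy_hours()   # e.g. the time the previous decoy took
        if completed > 0 and elapsed + expected > target_hrs:
            break                           # would overrun the target - stop early
        elapsed += run_decoy()              # returns the actual hours taken
        completed += 1
    return completed, elapsed               # resources released as soon as we return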
Working with this hypothesis, it is clear that the amount of time discarded will vary across tasks due to the varying complexity of the starting position, but that, within a single task, the decoys will each take the same amount of time as they have the same starting position.
Given that this is the case, it is clear that the amount of the target runtime discarded will vary according to where that target is set - move the target and the end of a decoy may fall just before or just after the cut-off. Say you move the target from 4 to 5 hours: the third decoy might or might not complete, in which case you might have almost no discard or an hour more.
Also, given that no time is being lost - a new task starts as soon as the last decoy completes and the task terminates - and, as I have shown before, the performance of the machine remains constant, it follows that the same amount of work is done.
Q.E.D.
(Sorry about that last, I’ve been wanting to use it in a real conversation since my school days)
|
|