Could GPU's mean the end of DC as we know it?

Message boards : Number crunching : Could GPU's mean the end of DC as we know it?

Ethan
Volunteer moderator

Joined: 22 Aug 05
Posts: 286
Credit: 9,304,700
RAC: 0
Message 42625 - Posted: 25 Jun 2007, 20:56:27 UTC
Last modified: 25 Jun 2007, 20:58:11 UTC

I doubt additional CPU power will go unused anytime soon :) Current calculations are designed to be as 'simple' as possible in order to run in a decent amount of time. In many cases assumptions have to be made or the calculation would be millions or billions of times larger. An example from my college days:

We simulated the interactions between stars to research galaxy formation. Since each star gravitationally interacted with every other star, there were ~N^2 calculations, where N is the number of stars in the simulation. I think we were able to get away with several thousand stars and get results in a couple of days, but real galaxies have hundreds of billions of stars (~100,000,000 calculations per unit of time vs ~10^22). The size of your time slices has an impact as well: do you calculate forces on objects a minute, a day, a year, or a century at a time? When galaxies take many millions of years to form, the time slice is yet another simplification that needs to be made. The same is true of things that happen very quickly, only in reverse.
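The N^2 blow-up described above is easy to see in code. Here is a minimal direct-summation sketch (not the original course code, just an illustration of the scheme): the double loop over star pairs is what makes the cost grow as ~N^2, and the `dt` argument is the "time slice" trade-off.

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def accelerations(positions, masses):
    """Direct-summation gravity: every star pulls on every other star.
    For N stars the double loop does ~N^2 pair interactions, which is
    why simulations cap N at thousands rather than hundreds of billions."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r = math.sqrt(sum(d * d for d in dx)) + 1e-9  # softening avoids div-by-zero
            for k in range(3):
                acc[i][k] += G * masses[j] * dx[k] / r ** 3
    return acc

def step(positions, velocities, masses, dt):
    """One Euler time slice of length dt: a coarser dt makes the run
    cheaper but less accurate -- the second trade-off described above."""
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        for k in range(3):
            velocities[i][k] += acc[i][k] * dt
            positions[i][k] += velocities[i][k] * dt
    return positions, velocities
```

Real N-body codes replace the inner loop with tree or grid approximations (another accuracy trade-off) precisely because the exact sum is unaffordable at galactic N.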

With the exception of looking for primes or star dust in gel, I can't think of any other DC projects that work without making trade-offs on the accuracy of the results in order to get them in a reasonable amount of time. That's why the help of everyone participating in R@H is so useful: it allows the project to get better results in a shorter period of time.

ID: 42625
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 42642 - Posted: 26 Jun 2007, 10:43:58 UTC

Parallel processing machine 100X faster than current PCs

Researchers at the University of Maryland have come up with a desktop parallel computing system they say is 100 times faster than current PCs and the kicker is, they want you to name it.

That's right, researchers are inviting the public to propose names for the prototype that they say uses a circuit board about the size of a license plate on which they have mounted 64 parallel processors.

To control those processors, they have developed the crucial parallel computer organization that allows the processors to work together and makes programming practical and simple for software developers, said Uzi Vishkin and the University of Maryland James Clark School of Engineering researchers who developed the machine.

Parallel processing on a massive scale, based on interconnecting numerous chips, has been used for years to create supercomputers. However, its application to desktop systems has been a challenge because of severe programming complexities.

The Clark School team found a way to use single chip parallel processing technology to change that.

Vishkin presented his computer last week at Microsoft's Workshop on Many-Core Computing.....
ID: 42642
AgnosticPope
Joined: 16 Dec 05
Posts: 18
Credit: 148,821
RAC: 0
Message 42745 - Posted: 28 Jun 2007, 1:17:08 UTC - in response to Message 42619.  

The problem at the moment is getting enough people interested and "in the know" about it.


I work at a "major corporation" that has "more than 50,000 employees," each of whom has a fairly substantial laptop or desktop computer. By corporate policy we are all absolutely prohibited from running "third party" applications on our computers.

So, if anybody has any good contacts with the Boards of Directors of any good Fortune 500 companies, most of whom would be in a similar situation, but most of whom encourage employees to undertake "charity work" using some company supplied resources, how about getting said Boards of Directors to allow or even encourage employees to use BOINC on their work computers?

If you want more "bang for your buck" of time spent advocating for some cause, that would be the path I would recommend.

== Bill

ID: 42745
GeneM

Joined: 4 Aug 06
Posts: 7
Credit: 1,112,726
RAC: 0
Message 42778 - Posted: 28 Jun 2007, 18:01:46 UTC - in response to Message 42745.  

The problem at the moment is getting enough people interested and "in the know" about it.


I work at a "major corporation" that has "more than 50,000 employees," each of whom has a fairly substantial laptop or desktop computer. By corporate policy we are all absolutely prohibited from running "third party" applications on our computers.

So, if anybody has any good contacts with the Boards of Directors of any good Fortune 500 companies, most of whom would be in a similar situation, but most of whom encourage employees to undertake "charity work" using some company supplied resources, how about getting said Boards of Directors to allow or even encourage employees to use BOINC on their work computers?

If you want more "bang for your buck" of time spent advocating for some cause, that would be the path I would recommend.

== Bill




If memory serves me, I believe IBM has for years encouraged its employees to connect their company computers to one of the projects on the World Community Grid.
ID: 42778
Greg_BE
Joined: 30 May 06
Posts: 5664
Credit: 5,711,666
RAC: 233
Message 42786 - Posted: 28 Jun 2007, 19:06:09 UTC - in response to Message 42778.  

The problem at the moment is getting enough people interested and "in the know" about it.


I work at a "major corporation" that has "more than 50,000 employees," each of whom has a fairly substantial laptop or desktop computer. By corporate policy we are all absolutely prohibited from running "third party" applications on our computers.

So, if anybody has any good contacts with the Boards of Directors of any good Fortune 500 companies, most of whom would be in a similar situation, but most of whom encourage employees to undertake "charity work" using some company supplied resources, how about getting said Boards of Directors to allow or even encourage employees to use BOINC on their work computers?

If you want more "bang for your buck" of time spent advocating for some cause, that would be the path I would recommend.

== Bill




If memory serves me, I believe IBM has for years encouraged its employees to connect their company computers to one of the projects on the World Community Grid.


If only we could get Microsoft online, not to mention a lot of the other high-tech companies in the Seattle/Bellevue area.

ID: 42786
FoldingSolutions
Joined: 2 Apr 06
Posts: 129
Credit: 3,506,690
RAC: 0
Message 42787 - Posted: 28 Jun 2007, 19:30:59 UTC - in response to Message 42786.  

Surely supercomputers such as the IBM Blue Gene must have downtime when they're not doing anything; couldn't they be used for DC (having overcome the programming difficulties, of course)? Think of the RAC on that :o
ID: 42787
Greg_BE
Joined: 30 May 06
Posts: 5664
Credit: 5,711,666
RAC: 233
Message 42793 - Posted: 28 Jun 2007, 21:08:32 UTC - in response to Message 42787.  
Last modified: 28 Jun 2007, 21:29:04 UTC

Surely supercomputers such as the IBM Blue Gene must have downtime when they're not doing anything; couldn't they be used for DC (having overcome the programming difficulties, of course)? Think of the RAC on that :o


oh help! that just blows my mind thinking about it. gees... how many work units could you run simultaneously on that machine? and the number of decoys made on it in our standard 4-8 hour runtime?!

Blue Gene (280 TFLOPS in its current config of 65,536 compute nodes, i.e. 2^16 nodes) vs Cray's baddest baby (the XT4, with 320 cabinets at 96 cores each and ~1 TFLOP of processing speed per cabinet, so 320 TFLOPS in total) running RAH... how does that add up? That's 600 TFLOPS of compute power between them at maximum configuration. And Cray uses chips from my favorite company: the AMD Opteron, 64-bit.
But can BOINC/RAH be adapted to work with Fortran and C++?
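Greg's back-of-envelope sum checks out; a couple of lines make the arithmetic explicit (figures as quoted in the post, 2007-era configurations):

```python
# Figures as quoted in the post above
blue_gene_tflops = 280             # IBM Blue Gene/L, current config
xt4_cabinets = 320
xt4_tflops_per_cabinet = 1         # ~1 TFLOP per cabinet
cray_tflops = xt4_cabinets * xt4_tflops_per_cabinet

assert 2 ** 16 == 65536            # the 65,536 compute nodes are 2^16
total_tflops = blue_gene_tflops + cray_tflops
print(total_tflops)                # 600 TFLOPS between the two machines
```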


ID: 42793
AgnosticPope
Joined: 16 Dec 05
Posts: 18
Credit: 148,821
RAC: 0
Message 42799 - Posted: 29 Jun 2007, 0:48:15 UTC

Teraflops are so yesterday. The new upper limit is 3 petaflops, according to this BBC article:

By comparison, the standard one-petaflop Blue Gene/P comes with 294,912 processors connected by a high-speed optical network.

However, it can be expanded to pack 884,736 processors, a configuration that would allow the machine to compute 3,000 trillion calculations per second (three petaflops).

...

IBM is also currently building a bespoke supercomputer for the DOE's Los Alamos National Laboratory, New Mexico.

Codenamed Roadrunner, it will be able to crunch through 1.6 thousand trillion calculations per second.

The computer will contain 16,000 standard processors working alongside 16,000 "cell" processors, designed for the PlayStation 3 (PS3).


The real question is whether BOINC software could be adapted to snuggle into arrays of cell processors. The Roadrunner architecture seems promising as the BOINC part could run in the standard processors so that only the computationally intensive code would need to be cellularized.
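The split described above, generic orchestration on the standard cores with only the hot numerical kernel ported to the Cell, is plain separation of concerns. A hypothetical sketch (all function names invented for illustration; this is not BOINC's or Rosetta's actual API):

```python
def compute_energy_reference(conformation):
    """Portable kernel: runs anywhere the ordinary client runs."""
    return sum(x * x for x in conformation)

def compute_energy_cell(conformation):
    """Stand-in for a Cell/SPE-accelerated version of the same kernel.
    In a real port only this function would be rewritten for the
    accelerator; here it just delegates so the sketch stays runnable."""
    return compute_energy_reference(conformation)

def run_workunit(conformations, kernel=compute_energy_reference):
    """Orchestration (fetching work, scheduling, reporting) stays on the
    standard processors; only `kernel` needs hardware-specific code."""
    return [kernel(c) for c in conformations]
```

Keeping the kernel behind a single function boundary is what would let the "BOINC part" stay untouched while the compute-intensive code gets cellularized.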

== Bill
ID: 42799
AgnosticPope
Joined: 16 Dec 05
Posts: 18
Credit: 148,821
RAC: 0
Message 42802 - Posted: 29 Jun 2007, 0:57:56 UTC

Speaking of petaflops, any of these teraflop+ machines would make a large dent in the Rosetta processing as the home page says this: "TeraFLOPS estimate: 51.857"

So all of us folks are only managing to contribute a measly 52 teraflops, more or less. Do you think one of those supercomputer owners would run BOINC for us in the computer's spare time?
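To put a number on "a large dent": dividing one petaflop by the home page's estimate shows a single such machine would be roughly nineteen times the entire volunteer grid.

```python
rosetta_tflops = 51.857            # "TeraFLOPS estimate" quoted from the home page
one_petaflop_in_tflops = 1000.0
speedup = one_petaflop_in_tflops / rosetta_tflops
print(round(speedup, 1))           # ~19.3x the whole volunteer grid
```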

== Bill
ID: 42802
Greg_BE
Joined: 30 May 06
Posts: 5664
Credit: 5,711,666
RAC: 233
Message 42811 - Posted: 29 Jun 2007, 9:03:52 UTC - in response to Message 42802.  

Speaking of petaflops, any of these teraflop+ machines would make a large dent in the Rosetta processing as the home page says this: "TeraFLOPS estimate: 51.857"

So all of us folks are only managing to contribute a measly 52 teraflops, more or less. Do you think one of those supercomputer owners would run BOINC for us in the computer's spare time?

== Bill


Heck, get Dr. Baker to ask them. lol
Cray is in Seattle, so he could just drive over from the campus, have a chat with them, and call up some of Cray's clients. A machine like that would clean up in one cycle what it takes us, what, a month or more to produce?
ID: 42811



©2024 University of Washington
https://www.bakerlab.org