Peakstream launches GPGPU for Windows

The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 38506 - Posted: 27 Mar 2007, 23:19:09 UTC
Last modified: 27 Mar 2007, 23:30:13 UTC

Peakstream launches GPGPU for Windows

Free beta to play with.

As of now, the Peakstream tools run only on ATI hardware, but there are rumblings about an NV version in the pipeline.

GPGPU code remains a fairly niche application in any case: it will either be very applicable to what you do and run amazingly fast, or not do what you want at all. There is little middle ground here.
ID: 38506
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 38734 - Posted: 30 Mar 2007, 13:21:17 UTC - in response to Message 38506.  
Last modified: 30 Mar 2007, 13:22:22 UTC

Nvidia F@H client MIA

"Fast forward to GPGPU...

The most damning bit is the Folding@Home client, or lack thereof. I am told by coders, CUDA devs, and even ATIers that the F@H clients are almost trivial to write. ATI did it, every architecture under the sun has an optimized client, and it will even run on the PS3.

The fact that it didn't happen tells us there is an Achilles heel in the architecture, or that it isn't nearly as general-purpose as ATI's last-gen parts. With the release of R600 in a month or so, one has to wonder if the concept of GPGPU will be forgotten at NV altogether.

All this together points to something seriously wrong with the NV architecture."

ID: 38734
BennyRop

Joined: 17 Dec 05
Posts: 555
Credit: 140,800
RAC: 0
Message 38759 - Posted: 31 Mar 2007, 3:11:16 UTC

Given the length of time it took Mhouston over at F@H to get the client working on the Nvidia chipsets before giving up and moving to the ATI chipsets, any reduction in the time to code the app would be a benefit. It'd be interesting to see the Rosetta@home client experimented with, to see what can be run on a GPU, provided that didn't detract from the crucial work being done on the CPU client.

Moving to different platforms (GPU, PS3, Xbox, etc.), optimizing the client for AMD & Intel, and coming up with new approaches that improve accuracy while reducing the workload (twice as accurate with half the CPU time) will all improve the efficiency of the project. Our helping attract and retain new crunchers will help improve the speed of the project as well. :) Here's hoping for more staff to help with the client upgrades/ports/changes.
ID: 38759
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 38943 - Posted: 3 Apr 2007, 21:12:14 UTC - in response to Message 38759.  

GPGPU vs. CPU will be the war of 2008-9

"GENERAL-PURPOSE COMPUTATION ON GPUS (GPGPU) will be the buzzword of 2008 and pretty much everybody in the industry knows it.

We have repeatedly heard Intel downplaying the importance of floating-point units, but those guys were only doing their job. In terms of raw performance, a GPU of today eats even Cell CPUs, let alone Clovertown or similar multi-core CPUs.

The reason for the downplay is simple: the next generation of professional cards from both Nvidia and AMD are such floating-point monsters that no CPU will be able to compete with the G80 and R600 cards. AMD is touting the phrase Stream Computing, while Nvidia is talking about GPU Computing - but the bottom line is that both are the same: using the GPU for CPU-style computation.

We should point out that the GPU is still far from being a replacement for the CPU. GPU chips can do insanely well in many maths-intensive applications, but only in ones that do not rely on a random factor. For any prediction needed, CPUs have massive amounts of cache that can store predictable and non-predictable instructions; a GPU does not have room to miss a cycle."
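
As a rough CUDA sketch of the quoted claim (the kernel and its 0.5 threshold are made up for illustration, not from the article): when the branch below depends on effectively random data, neighbouring GPU threads take different paths and execute them one after the other, while a CPU's branch predictor absorbs the same pattern cheaply.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative only: a data-dependent branch. Threads in the same warp
// that disagree on the condition execute both paths serially (divergence);
// a CPU with a branch predictor pays far less per element.
__global__ void branchy(const float* x, float* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (x[i] > 0.5f)            // effectively random per element
        y[i] = sqrtf(x[i]);     // some threads take this path...
    else
        y[i] = x[i] * x[i];     // ...while their neighbours idle, then swap
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));   // inputs left unfilled in this sketch
    cudaMalloc(&y, n * sizeof(float));
    branchy<<<(n + 255) / 256, 256>>>(x, y, n);
    cudaDeviceSynchronize();
    printf("kernel done\n");
    cudaFree(x); cudaFree(y);
    return 0;
}
```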

ID: 38943
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 39032 - Posted: 5 Apr 2007, 3:56:32 UTC - in response to Message 38943.  

Nvidians to launch dedicated GPGPU brand

"NVIDIA PLANS TO to launch a dedicated brand for its GPGPU business, according to investor Bible The Street.

What that brand is, we just don't know. DAAMIT is already pushing the 'Stream Computing' angle, but Nvidia is nothing if not a marketing company, and it is sure to come up with something catchy.

Given that the average margin in the professional business is 60%, and the fact that Nvidia doesn't have to spend much R&D money on the GPU Computing products (since they'll just be remarketed GeForce products), the earnings potential is lucrative - handy at a time when the company is facing a bizarrely placed consumer market, with allies turned enemies and enemies turned allies. It's amazing what cash in the bank can do for you - just ask 3DFX."


ID: 39032
Profile (_KoDAk_)

Joined: 18 Jul 06
Posts: 109
Credit: 1,859,263
RAC: 0
Message 39116 - Posted: 7 Apr 2007, 12:11:21 UTC

Do YOU plan (if YES, when?) to use the GeForce™ 8x00 in Rosetta?
ID: 39116
Profile Paydirt
Joined: 10 Aug 06
Posts: 127
Credit: 960,607
RAC: 0
Message 39194 - Posted: 9 Apr 2007, 17:08:37 UTC

The simplicity of nVidia's shader units may be an obstacle. Supposedly, their shaders are only 1- or 2-dimensional while the ATI shaders are 4-dimensional. What this means is that ATI's can crunch 4-dimensional vector math (I think?). The shaders are where the GPU crunching gets done, so maybe it is partially down to the limitations of the nVidia shaders.

nVidia shaders are "unified" which helps with gaming, I guess, but I don't think it does much for crunching.

Maybe nVidia's hardware is inferior for crunching?


Regardless, I'd love to see crunching code for the G80 for most any project. It'd be a big boost. ATI R600 crunching should rock too with 2.5x the number of shaders of the x1950xtx. So I'm excited.
ID: 39194
Profile (_KoDAk_)

Joined: 18 Jul 06
Posts: 109
Credit: 1,859,263
RAC: 0
Message 39202 - Posted: 9 Apr 2007, 21:11:00 UTC

I hope CUDA GPU computing will be better than ATI's...
ID: 39202
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 39215 - Posted: 10 Apr 2007, 2:26:15 UTC - in response to Message 39202.  
Last modified: 10 Apr 2007, 2:26:41 UTC

Nvidia's beta compiler dumps maths to GPU

GRAPHICS CARD OUTFIT Nvidia has released the beta versions of the SDK and C compiler for their Compute Unified Device Architecture (CUDA) technology.

The beta will allow developers to offload the chip's maths functions onto the GPU and pen code that uses the GPU better.

According to Arstechnica, the idea is a bit different from ATI's "close to metal" idea which opens up the low-level ISA so that their graphics products can be programmed directly in assembly language.

But the main thing is that the two methods are so different that you will have to opt for one or the other. This means that both ATI and Nvidia will be locking developers into their hardware.

The big idea appears to be that if punters are forced to use their standards then there will be no room when Intel enters the market.
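
For a sense of what the CUDA beta's "dump the maths on the GPU" workflow looks like, here is a minimal sketch in CUDA C, assuming nothing beyond the standard runtime; the kernel is the classic saxpy example, not taken from Nvidia's SDK.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// The kernel: plain C that the nvcc compiler turns into GPU code.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_y, n * sizeof(float));
    // ... a real program would cudaMemcpy input data in here ...
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);  // maths runs on the GPU
    cudaDeviceSynchronize();
    printf("saxpy finished\n");
    cudaFree(d_x); cudaFree(d_y);
    return 0;
}
```

The host code stays ordinary C; only the __global__ function and the <<<blocks, threads>>> launch are CUDA-specific, which is the contrast with ATI's assembly-level "close to metal" approach described above.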
ID: 39215
Profile Paydirt
Joined: 10 Aug 06
Posts: 127
Credit: 960,607
RAC: 0
Message 39230 - Posted: 10 Apr 2007, 15:22:11 UTC - in response to Message 39215.  

Folding At Home runs the code the direct way (ATI). FAH's main coder says it would take months to code FAH in CUDA... with no guarantee of success. Other CUDA developers have run into snags.

ID: 39230
|MatMan|

Joined: 10 Oct 05
Posts: 3
Credit: 1,602,625
RAC: 0
Message 39923 - Posted: 26 Apr 2007, 21:58:37 UTC - in response to Message 39230.  

> The simplicity of nVidia's shader units may be an obstacle. Supposedly, their shaders are only 1- or 2-dimensional while the ATI shaders are 4-dimensional. What this means is that ATI's can crunch 4-dimensional vector math (I think?). The shaders are where the GPU crunching gets done, so maybe it is partially down to the limitations of the nVidia shaders.

Sorry, but this is simply nonsense. It is actually an advantage that nVidia's shaders are scalar (1-dimensional), because then it doesn't matter how many dimensions the data has. If you have 4-dimensional shader hardware but your data has only 3 dimensions, the remaining dimension goes unused -> you waste 25% of your processing power!
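
A minimal CUDA sketch of that utilization point (the kernel and its names are illustrative, not from any real client): to a scalar architecture, a float3 array is just 3*N independent floats, so no lane sits idle regardless of the data's dimensionality.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Scalar units don't care how many components the data has: each thread
// handles one float, so every lane does useful work. (On 4-wide vector
// hardware, the same float3 data leaves 1 of every 4 lanes unused - the
// 25% waste described above.)
__global__ void scale(const float* in, float* out, int count, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) out[i] = in[i] * s;
}

int main()
{
    const int nVectors = 1024;           // 1024 three-component vectors
    const int count = 3 * nVectors;      // treated as 3*N scalars
    float *d_in, *d_out;
    cudaMalloc(&d_in,  count * sizeof(float));  // inputs unfilled in this sketch
    cudaMalloc(&d_out, count * sizeof(float));
    scale<<<(count + 255) / 256, 256>>>(d_in, d_out, count, 2.0f);
    cudaDeviceSynchronize();
    printf("%d scalar ops, no idle lanes\n", count);
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```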

> nVidia shaders are "unified" which helps with gaming, I guess, but I don't think it does much for crunching.

Unified shaders are an advantage too. On non-unified hardware only the pixel shaders are used for GPGPU computing, while the vertex shaders sit idle...

> Maybe nVidia's hardware is inferior for crunching?

I really don't think so.

> Folding At Home runs the code the direct way (ATI). FAH's main coder says it would take months to code FAH in CUDA... with no guarantee of success. Other CUDA developers have run into snags.

Folding At Home DOES NOT run the code the direct way (ATI). They use standard DX9 code, like any DX9 game. FAH does not run on nVidia cards due to driver problems: the FAH shader programs are much, much longer and more complex than those in any game, but the drivers are made and tested for games, as games were the only application for 3D gfx cards until now. Unfortunately, because of Vista and the quite new G80 architecture, the nVidia driver team has a lot of other things to do besides fixing the driver for FAH. At least they are in contact with the FAH developers, so maybe one day we'll see FAH on nVidia cards as well...

ID: 39923
mimo

Joined: 29 Apr 07
Posts: 2
Credit: 52,764
RAC: 0
Message 40034 - Posted: 29 Apr 2007, 10:19:20 UTC
Last modified: 29 Apr 2007, 10:24:13 UTC

I am working on a SETI@home GPU client (a GLSL version, runnable on ATI & Nvidia). So where is the source code for Rosetta? I'd like to look at it and see if there is some way to port it to the GPU...
ID: 40034
Profile dcdc

Joined: 3 Nov 05
Posts: 1829
Credit: 115,297,484
RAC: 48,390
Message 40037 - Posted: 29 Apr 2007, 10:32:40 UTC - in response to Message 40034.  

> I am working on a SETI@home GPU client (a GLSL version, runnable on ATI & Nvidia). So where is the source code for Rosetta? I'd like to look at it and see if there is some way to port it to the GPU...

Hi mimo

The source isn't available directly for download, but it is open source, so the devs will send it to you if you request it. The mod's email is rosettamod( at )gmail.com.

HTH and good luck!
Danny
ID: 40037
mimo

Joined: 29 Apr 07
Posts: 2
Credit: 52,764
RAC: 0
Message 40038 - Posted: 29 Apr 2007, 11:29:49 UTC

dcdc thanx
ID: 40038
Profile bruce boytler
Joined: 17 Sep 05
Posts: 68
Credit: 3,565,442
RAC: 0
Message 40103 - Posted: 30 Apr 2007, 20:00:48 UTC

Hi All,

Was this an interesting thread! First off, the CPU, although slow, is extremely flexible and accurate; it makes few mistakes. So in DC computing it can crunch anything.

The Cell BE is of course very fast compared to the PC CPU, but is less flexible in what it can do. At F@H the Cell can only do explicit solvation calculations, whereas the PC CPU can do both explicit and implicit. This is just one example.

GPU computing is even less flexible than the Cell. The GPU runs extremely fast but makes a lot of mistakes; still, the speed is so great that even after correcting mistakes it is way faster than the Cell BE. The main problem is that it can only be used for a narrow range of calculations.

The Nvidia GPU was abandoned at F@H because it does not have the right shader architecture to render a fold any better than a CPU, as is true of most other GPUs. At this point the ATI X19xx series, with its 48 shaders, has the ability to run a narrow range of folding workunits, and very fast.

But what the ATI can do is very, very exciting and really pushing the science forward.

I tried to keep this in layman's terms; it's a tough topic to comment on. Hope this helps anyone contemplating using a GPU to crunch.
ID: 40103
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 41384 - Posted: 24 May 2007, 12:00:09 UTC

Nvidia claims G92 will be a 1 Teraflop beast

The actual power of G92 might surprise some. The 8800 can rustle up about 330 Gflops, so a 1 Teraflop G92 means the green team is suggesting that the 9800 could be roughly three times more powerful. If that's true, the question remains - why bother?

Nvidia Graphics Chips Perform Double Duty as CPUs

The GeForce GPU, for example, can act as a co-processor to the CPU, has its own 16K-bit memory and runs more than 128,000 instruction threads in parallel, he said. Groups of threads can also work together to accomplish one task.

Nvidia's approach differs from the one touted by Advanced Micro Devices Inc. Tuesday at Microprocessor Forum. AMD, which acquired the graphics chip company ATI Technologies Inc. in 2006, is in the early stages of development of a combined CPU/GPU called Fusion, which is due some time in 2009. Fusion is offered more as a cost play, Nickolls said - combining the GPU and CPU to reduce cost rather than to boost performance.
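
As a hedged sketch of the "groups of threads can work together" point above (a generic block reduction, not Nvidia's own example): threads in a CUDA block share a small scratch memory and synchronize while combining their results.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// One block of 256 threads cooperatively sums its tile of the input:
// each thread loads one element into shared memory, then the block
// halves the active threads each step until thread 0 holds the total.
__global__ void block_sum(const float* in, float* partial, int n)
{
    __shared__ float tile[256];                    // per-block scratch memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                               // whole block rendezvous
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        partial[blockIdx.x] = tile[0];             // one partial sum per block
}

int main()
{
    const int n = 1 << 20;
    const int blocks = (n + 255) / 256;            // must launch 256 threads/block
    float *d_in, *d_partial;
    cudaMalloc(&d_in, n * sizeof(float));          // inputs unfilled in this sketch
    cudaMalloc(&d_partial, blocks * sizeof(float));
    block_sum<<<blocks, 256>>>(d_in, d_partial, n);
    cudaDeviceSynchronize();
    printf("%d partial sums computed\n", blocks);
    cudaFree(d_in); cudaFree(d_partial);
    return 0;
}
```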

ID: 41384
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 41421 - Posted: 25 May 2007, 10:42:01 UTC - in response to Message 41384.  

Nvidia CUDA arriving shortly

Interestingly, green team spinners revealed that the next update to the GeForce line, amusingly dubbed the 9800, will include double-precision floating point arithmetic, a fact sure to please mathematicians and high-performance computing geeks everywhere.

Andy Keane, the man in charge of GPGPU at Nvidia, said that Intel had been describing CUDA v Terascale as "a war" between CPU and GPU. He didn't, however, mention how his side was likely to win.

The usual roll-out of early success stories was in tow, with Massachusetts General Hospital claiming a 100x improvement in digital tomosynthesis performance.
ID: 41421
