Message boards : Number crunching : GPU
Author | Message |
---|---|
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,523,781 RAC: 8,309 |
http://en.wikipedia.org/wiki/CUDA#Limitations Why not OpenCL?? http://boinc.fzk.de/poem/forum_thread.php?id=384 |
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
The bottleneck mentioned is physically in the hardware. OpenCL is not hardware and so does nothing to relieve the bottleneck. Rosetta Moderator: Mod.Sense |
mikey Send message Joined: 5 Jan 06 Posts: 1895 Credit: 9,132,201 RAC: 5,104 |
The problem seems to be that the Rosetta calculations do not fit into a GPU's memory, meaning there would be a TON of swapping back and forth, negating any advantage of using the GPU in the first place. A lot by whose counting? 1 GB of memory is not a lot by CPU standards, but yes, it is a lot by GPU standards. And since not everyone has that much memory on their GPUs, the app must be tailored to fit many GPUs, so it must be sized to fit in as little as 256 MB of memory, the standard for most GPU cards of just a few years ago. |
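[Editor's note] To make the swapping point concrete, here is a minimal CUDA sketch, assuming a hypothetical kernel and buffer sizes (this is not Rosetta code): when the working set is larger than the card's memory, the data has to be streamed through the GPU in chunks, and for short kernels the host-to-device copies, not the compute, set the overall pace.

```cuda
// Minimal sketch (hypothetical kernel and sizes, not Rosetta code): when the
// working set is bigger than the card's memory, it has to be streamed through
// the GPU in chunks over the PCIe bus.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void score(float *buf, size_t n) {
    // Grid-stride loop: stand-in for the real per-conformation scoring work.
    for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += (size_t)gridDim.x * blockDim.x)
        buf[i] *= 1.0001f;
}

int main() {
    const size_t total = 1ull << 28;   // ~1 GB of floats: the whole working set
    const size_t chunk = 1ull << 26;   // ~256 MB: what the card can hold at once
    float *host = (float *)malloc(total * sizeof(float));
    float *dev  = nullptr;
    cudaMalloc((void **)&dev, chunk * sizeof(float));
    for (size_t i = 0; i < total; ++i) host[i] = 1.0f;

    for (size_t off = 0; off < total; off += chunk) {
        size_t n = (total - off < chunk) ? (total - off) : chunk;
        // Each chunk crosses the bus twice; for short kernels these copies,
        // not the compute, dominate the run time.
        cudaMemcpy(dev, host + off, n * sizeof(float), cudaMemcpyHostToDevice);
        score<<<1024, 256>>>(dev, n);
        cudaMemcpy(host + off, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    }
    cudaDeviceSynchronize();
    printf("host[0] = %f\n", host[0]);
    cudaFree(dev);
    free(host);
    return 0;
}
```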
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,523,781 RAC: 8,309 |
The bottleneck mentioned is physically in the hardware. Bottleneck between processor and memory? Is this the solution?? http://sites.amd.com/us/fusion/apu/Pages/fusion.aspx |
mikey Send message Joined: 5 Jan 06 Posts: 1895 Credit: 9,132,201 RAC: 5,104 |
The bottleneck mentioned is physically in the hardware. Sounds like a way to make us buy new stuff because our current stuff won't work with this! Yes, it could be faster, but only if they design it that way; if they just stick a GPU on the same die as a CPU, they are not making it better, just smaller. Right now one of the problems in PCs is the heat generated by the CPU and GPU; if you stick all that heat in one place, you must design better airflow too!! |
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
Right, getting two components closer together minimizes the time for them to interoperate, but you still have to essentially page fault everything to the GPU for it to process. So, GPU memory is still the primary constraint. Rosetta Moderator: Mod.Sense |
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,523,781 RAC: 8,309 |
Right, getting two components closer together minimizes the time for them to interoperate, but you still have to essentially page fault everything to the GPU for it to process. So, GPU memory is still the primary constraint. Yep, but in the AMD (and Intel) roadmaps this is only the first step toward deeper integration between the CPU and GPU components... |
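[Editor's note] As an aside on what that integration would buy: below is a minimal CUDA sketch of "zero-copy" mapped host memory, the access pattern that APU-style shared memory is meant to make cheap. The kernel is a hypothetical stand-in, not Rosetta code; on a discrete card every such access still crosses PCIe, which is exactly the bottleneck discussed above.

```cuda
// Minimal sketch (hypothetical kernel, not Rosetta code) of "zero-copy" mapped
// host memory: the GPU dereferences host RAM directly instead of copying it.
// On a discrete card every access still travels over PCIe; only real shared
// memory between CPU and GPU (APU-style) removes that trip.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void touch(float *buf, size_t n) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] += 1.0f;   // reads/writes host memory through the bus
}

int main() {
    const size_t n = 1 << 20;
    float *host = nullptr, *devView = nullptr;
    cudaSetDeviceFlags(cudaDeviceMapHost);                     // allow mapped host memory
    cudaHostAlloc((void **)&host, n * sizeof(float), cudaHostAllocMapped);
    cudaHostGetDevicePointer((void **)&devView, host, 0);      // GPU-visible alias
    for (size_t i = 0; i < n; ++i) host[i] = 0.0f;

    touch<<<(unsigned)((n + 255) / 256), 256>>>(devView, n);   // no cudaMemcpy anywhere
    cudaDeviceSynchronize();
    printf("host[0] = %f\n", host[0]);
    cudaFreeHost(host);
    return 0;
}
```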