OpenCL

Message boards : Number crunching : OpenCL


sgaboinc
Joined: 2 Apr 14
Posts: 282
Credit: 208,966
RAC: 0
Message 91246 - Posted: 11 Oct 2019, 13:09:12 UTC
Last modified: 11 Oct 2019, 13:16:43 UTC

Would Rosetta ever do it? :o lol
But I'd guess some volunteers would have an issue with the power budget; GPUs are extremely power hungry, and some draw in excess of 200 W.
But for sure, if single precision is all that is needed and the compute can be vectorized, the higher-end Nvidia and AMD GPUs easily deliver 2-20 Tflops each. At an average of, say, 5 Tflops, it would take just 200 hosts to reach a combined 1 Pflops of compute power.
[VENETO] boboviz
Joined: 1 Dec 05
Posts: 1864
Credit: 8,185,649
RAC: 7,167
Message 91248 - Posted: 11 Oct 2019, 13:16:49 UTC - in response to Message 91246.  

Vexata quaestio ("a vexed question").
I keep updating this thread, but it's more of a hobby.
sgaboinc
Joined: 2 Apr 14
Posts: 282
Credit: 208,966
RAC: 0
Message 91249 - Posted: 11 Oct 2019, 13:21:22 UTC
Last modified: 11 Oct 2019, 13:21:48 UTC

I'd think we'd need to wait, as GPUs are certainly expensive, both in purchase cost and in their very high power consumption.
But for sure, for vectorized compute they deliver supercomputer performance on desktops and servers.
[VENETO] boboviz
Joined: 1 Dec 05
Posts: 1864
Credit: 8,185,649
RAC: 7,167
Message 91250 - Posted: 11 Oct 2019, 15:40:48 UTC - in response to Message 91249.  
Last modified: 11 Oct 2019, 15:42:03 UTC

I'd think we'd need to wait

Oh, well, the first post in R@H about OpenCL was at the beginning of 2009, over 10 years ago. The developers tried to put R@H on a GPU 5 or 6 years ago, but got only a small improvement. A lot has changed in the years since, so I don't know whether they will try again in their lab.
R@H runs a lot of heterogeneous simulations on very different proteins, so it is very difficult to port all of that complexity to a GPU.
I think there is a simpler way: create a "specialized" app that does only one kind of work (for example, ab initio) and start with an OpenCL C++ app on the CPU. Once that app is stable and debugged, try to port it to the GPU (see the sketch below).
Are they interested? Do they have the knowledge to do that?
I don't know.
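
To make the CPU-first idea concrete, here is a minimal sketch (not Rosetta code; it assumes only the standard OpenCL C host API). The app is developed and debugged against the CPU device, and retargeting the GPU later is a one-constant change:

    /* pick an OpenCL device: debug on the CPU first, retarget the GPU later */
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void) {
        cl_platform_id platform;
        cl_device_id device;
        cl_uint n = 0;

        clGetPlatformIDs(1, &platform, &n);

        /* swap CL_DEVICE_TYPE_CPU for CL_DEVICE_TYPE_GPU once the
           kernels are stable and debugged */
        cl_int err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
        if (err != CL_SUCCESS) {
            fprintf(stderr, "no CPU OpenCL device found (err=%d)\n", err);
            return 1;
        }

        char name[256];
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("running on: %s\n", name);
        return 0;
    }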
sgaboinc
Joined: 2 Apr 14
Posts: 282
Credit: 208,966
RAC: 0
Message 91251 - Posted: 11 Oct 2019, 16:06:24 UTC - in response to Message 91250.  
Last modified: 11 Oct 2019, 16:46:47 UTC

Well, I'm not sure whether BOINC imposes any constraints, but if not, my guess is that there could be different apps for different purposes. I doubt every kind of app can benefit from a GPU, but some certainly would!
I'd think the hard part, again, is on the server side, as it would need to associate different jobs with different binaries.

For that matter, they could ship a monolithic version of Rosetta with OpenCL extensions, and only the volunteers who want to run tasks on their GPU would use those binaries.

My guess is that for volunteers wishing to crunch on a GPU, the motivation is partly that credit accumulation would need to be much higher, or the jobs much shorter, since a decent GPU easily consumes 150-200 W or more. That is easily 2-5 times more power hungry than crunching on a CPU with 8-16 concurrent threads.
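As a rough worked example (all numbers assumed for illustration): a 200 W GPU that finishes a task 5 times faster than a 100 W CPU spends 200 × t/5 = 40·t joules per task versus 100·t joules on the CPU, so the energy per completed result can actually drop despite the higher draw; the credit rate would just need to reflect the faster turnaround.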

Using a GPU would also bring some rather troublesome driver dependencies, and volunteers would need to set things up appropriately so that the GPU runtime can be linked/loaded at run time.
But GPU computing isn't new; Folding@home has been there, done that:
https://foldingathome.org/

Nevertheless, with Moore's law breaking down, my suspicion is that OpenCL may become the *next big thing*, i.e. so pervasive that it is *expected* for any compute-intensive job.

Actually, this is in some ways a *bad thing*: power consumption increases dramatically, and CPUs with high core counts grow linearly or even exponentially more expensive (exactly what is happening today). But OpenCL vector compute does deliver the Tflops-to-Pflops supercomputing power that can no longer be reached by scaling transistors down further, at the cost of much higher power consumption.

Tutorials about OpenCL abound on the internet, and the kernel language is very much a subset of C with restrictions (see the kernel sketch below):
https://handsonopencl.github.io/
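
To give a feel for it, a minimal kernel sketch (a generic vector add for illustration, not anything from Rosetta): each work-item computes one output element, which is exactly the kind of vectorized structure a GPU exploits.

    /* OpenCL C kernel: one work-item per output element */
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *c,
                          const unsigned int n)
    {
        int i = get_global_id(0);   /* this work-item's global index */
        if (i < n)                  /* guard against padded global sizes */
            c[i] = a[i] + b[i];
    }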
Back then, and even today, GPUs are expensive hardware, but things have improved considerably, especially with the breakneck speed at which features are creeping into the newer OpenCL and OpenGL versions.

The other thing is, I'm not sure whether the engineers and scientists at Intel, AMD, etc. can push Moore's law further and scale voltages down to, say, 0.1 V peak. If you could run a CPU at 0.1 V and drive 100 A into it, that would be a mere 0.1 V × 100 A = 10 W.
[VENETO] boboviz
Joined: 1 Dec 05
Posts: 1864
Credit: 8,185,649
RAC: 7,167
Message 91258 - Posted: 12 Oct 2019, 17:42:11 UTC - in response to Message 91251.  

I'd think the hard part, again, is on the server side, as it would need to associate different jobs with different binaries.

I don't think so. With recent versions of the BOINC server, you can manage different apps very easily (a sketch below).
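
For instance (a sketch of the standard BOINC server workflow; the app name is hypothetical, not an actual R@H app), a separate app is declared in the project's project.xml:

    <app>
        <name>rosetta_abinitio_opencl</name>
        <user_friendly_name>Rosetta ab initio (OpenCL)</user_friendly_name>
    </app>

After placing the binaries under apps/rosetta_abinitio_opencl/, running bin/xadd and then bin/update_versions registers everything, and the scheduler hands those jobs only to hosts whose platform and plan class match.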

they could ship a monolithic version of Rosetta with OpenCL extensions, and only the volunteers who want to run tasks on their GPU would use those binaries.

I repeat: starting with a simple, single-purpose, specialized app is the easy way. After that, they can add, if possible, OpenCL extensions to the monolithic code.

my guess is that for volunteers wishing to crunch on a GPU, the motivation is partly that credit accumulation would need to be much higher, or the jobs much shorter.

Or, for example, to simulate bigger and more complex proteins that cannot be simulated on a CPU. Faster = more science.

Using a GPU would also bring some rather troublesome driver dependencies, and volunteers would need to set things up appropriately so that the GPU runtime can be linked/loaded at run time.

Rosetta volunteers are smarter than you think. And the latest versions of the BOINC client have fewer problems with GPUs than in the past.

Back then, and even today, GPUs are expensive hardware, but things have improved considerably, especially with the breakneck speed at which features are creeping into the newer OpenCL and OpenGL versions.

Not very expensive. For 150 euros/dollars you can get a GPU with 5 Tflops of single precision. Not bad.
sgaboinc
Joined: 2 Apr 14
Posts: 282
Credit: 208,966
RAC: 0
Message 91259 - Posted: 12 Oct 2019, 18:18:26 UTC - in response to Message 91258.  
Last modified: 12 Oct 2019, 18:21:25 UTC



Back then, and even today, GPUs are expensive hardware, but things have improved considerably, especially with the breakneck speed at which features are creeping into the newer OpenCL and OpenGL versions.

Not very expensive. For 150 euros/dollars you can get a GPU with 5 Tflops of single precision. Not bad.


True, there are lots of used high-end GPUs dumped on the market in the fallout from bitcoin mining; it is a blessing in disguise, sort of.
But energy costs aren't low: high-end GPUs normally consume > 150 W. I'd consider it if, say, I were able to use something like solar to supplement the power, but otherwise it is costly to burn fossil fuel.
The other consideration would be whether crunching time can be made much shorter.
[VENETO] boboviz
Joined: 1 Dec 05
Posts: 1864
Credit: 8,185,649
RAC: 7,167
Message 91264 - Posted: 14 Oct 2019, 7:10:06 UTC - in response to Message 91259.  
Last modified: 14 Oct 2019, 7:10:15 UTC

But energy costs aren't low: high-end GPUs normally consume > 150 W. I'd consider it if, say, I were able to use something like solar to supplement the power, but otherwise it is costly to burn fossil fuel.

But you can also crunch with mid-range GPUs that consume 150 W or less.
For example, the upcoming AMD RX 5500 seems fine for this kind of work.
But we are writing about dreams.
There is NO SSEx support and NO native 64-bit support for Windows, let alone GPU support.
[VENETO] boboviz
Joined: 1 Dec 05
Posts: 1864
Credit: 8,185,649
RAC: 7,167
Message 91272 - Posted: 15 Oct 2019, 15:13:12 UTC - in response to Message 91250.  

I think there is a simpler way: create a "specialized" app that does only one kind of work (for example, ab initio) and start with an OpenCL C++ app on the CPU.

A lot of steps have been taken in this direction (a platform-detection sketch follows the list):
PoCL 1.4
Clang support for OpenCL C++
etc.
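
For example (a sketch using only the standard OpenCL host API; PoCL reports the platform name "Portable Computing Language"), an app can check whether a CPU OpenCL stack such as PoCL is installed by enumerating platforms:

    /* list the installed OpenCL platforms */
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void) {
        cl_platform_id platforms[16];
        cl_uint n = 0;
        clGetPlatformIDs(16, platforms, &n);
        for (cl_uint i = 0; i < n; ++i) {
            char name[256];
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                              sizeof(name), name, NULL);
            printf("platform %u: %s\n", i, name);
        }
        return 0;
    }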
[VENETO] boboviz
Joined: 1 Dec 05
Posts: 1864
Credit: 8,185,649
RAC: 7,167
Message 91379 - Posted: 17 Nov 2019, 15:01:42 UTC - in response to Message 91272.  

A few days ago, the Khronos Group released SYCL 1.2.1 Release 6, with a lot of improvements (like TensorFlow support, a CUDA back-end, etc.) and bugfixes.
This is great for writing C++ code for GPUs.
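
As a taste of what that looks like (a minimal sketch against the SYCL 1.2.1 API; the kernel name vadd is arbitrary): a vector add written as single-source C++, with the runtime choosing the device and no separate kernel language at all.

    // SYCL-style vector add: single-source C++
    #include <CL/sycl.hpp>
    #include <vector>

    int main() {
        const size_t n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        cl::sycl::queue q;  // default selector: a GPU if present, else the CPU
        {
            // buffers wrap host memory and copy results back on destruction
            cl::sycl::buffer<float, 1> A(a.data(), cl::sycl::range<1>(n));
            cl::sycl::buffer<float, 1> B(b.data(), cl::sycl::range<1>(n));
            cl::sycl::buffer<float, 1> C(c.data(), cl::sycl::range<1>(n));

            q.submit([&](cl::sycl::handler& h) {
                auto ra = A.get_access<cl::sycl::access::mode::read>(h);
                auto rb = B.get_access<cl::sycl::access::mode::read>(h);
                auto wc = C.get_access<cl::sycl::access::mode::write>(h);
                h.parallel_for<class vadd>(cl::sycl::range<1>(n),
                                           [=](cl::sycl::id<1> i) {
                    wc[i] = ra[i] + rb[i];
                });
            });
        }   // c now holds a[i] + b[i] for every i
        return 0;
    }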



