Do you mean you can use it as a conventional block device on Linux?
You could use it, but when you put data in from outside it would be quite slow.
The only way to achieve proper speeds is to read/write the memory with the CUs themselves instead of going back to the CPU to do so.
(Even the correct approach costs around 20-30% in r/w performance overhead.)
My implementation (for Windows, in C++ aided by OpenCL)
takes an image of a disk like a texture file,
creates 128M chunk packages (I can fit 2 packages per CU, leaving around 1020M of lost/free memory in total),
creates a pool of 60 threads and maps each thread to a CU via OpenCL,
then leaves most of the work to OpenCL itself in the background,
while I create a RAM disk that works like RAID0 across 60 small 256M partitions. The internal operations handled by OpenCL, like processing indices, database operations and so on, can run at a blazing 500-750GB/s (on a Radeon Pro VII); if you load it from the CPU instead, e.g. a game or something like MSSQL, you'll be running at 35-50GB/s at best, since every trip back to the CPU slows you down to hell.
(I haven't shared the code on git or anywhere else, but if I ever make it more user-friendly I might sell it.)
There are ramdisk implementations as well as Linux vramfs implementations available on git that you can test, but as mentioned I doubt you'll get more than 15-35GB/s on a 1TB/s GPU. At that point it just isn't much better than a typical RAM disk, because of the CPU overhead in their implementations. Still, a ramdisk or vramdisk is great for workloads that do a lot of reads and writes daily, if you're looking to save money on SSDs. If you do indexing or generate a lot of junk data daily, it's preferable to get more RAM instead of enterprise SAS SSDs and just dump it onto storage once a day as a backup.
(The git projects may or may not work; I got started by looking at some of the code on git myself.)