Posted to dev@hama.apache.org by "Edward J. Yoon" <ed...@apache.org> on 2010/02/12 05:35:00 UTC

Interesting project, hama-mrcl (CUBLAS)

I just found this project -- http://code.google.com/p/mrcl/

CUBLAS is a BLAS library ported to CUDA, which enables fast computation
on GPUs without operating the CUDA driver directly.
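
For concreteness, here is a minimal host-side sketch of a single CUBLAS
call (my own illustration, not code from mrcl; it assumes a CUDA toolkit
with the CUBLAS v2 C API and small single-precision, column-major
matrices, error checks omitted). It computes C = A * B entirely through
library calls, never touching the CUDA driver API directly:

  /* build with e.g.: nvcc sgemm_demo.c -lcublas */
  #include <stdio.h>
  #include <cuda_runtime.h>
  #include <cublas_v2.h>

  int main(void) {
      const int n = 4;                          /* n x n matrices */
      const float alpha = 1.0f, beta = 0.0f;
      float A[16], B[16], C[16];
      for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

      /* device buffers */
      float *dA, *dB, *dC;
      cudaMalloc((void **)&dA, sizeof(A));
      cudaMalloc((void **)&dB, sizeof(B));
      cudaMalloc((void **)&dC, sizeof(C));

      cublasHandle_t handle;
      cublasCreate(&handle);
      cublasSetMatrix(n, n, sizeof(float), A, n, dA, n);
      cublasSetMatrix(n, n, sizeof(float), B, n, dB, n);

      /* C = alpha * A * B + beta * C (column-major) */
      cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                  &alpha, dA, n, dB, n, &beta, dC, n);

      cublasGetMatrix(n, n, sizeof(float), dC, n, C, n);
      printf("C[0] = %f\n", C[0]);              /* 8.0 for these inputs */

      cublasDestroy(handle);
      cudaFree(dA); cudaFree(dB); cudaFree(dC);
      return 0;
  }

The point is that the GEMM interface looks just like the CPU BLAS one;
only the handle setup and the host/device copies are CUDA-specific.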

-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org

Re: Please, remove me from the mailing list!!!

Posted by "Edward J. Yoon" <ed...@apache.org>.
I just removed this guy from the dev/user mailing lists.

On Tue, Feb 23, 2010 at 11:08 AM, Mari <ma...@hotmail.com> wrote:
>
>
> Sent.
>
>
> On 22/02/2010, at 01:45, "Edward J. Yoon" <ed...@apache.org>
> wrote:
>
>> There is a report about the performance of GPU-accelerated matrix
>> computation with M/R.
>>
>> http://mrcl.googlecode.com/svn/trunk/report/ (Korean)
>>
>> In a nutshell, they performed matrix multiplication using Map/Reduce
>> (a block algorithm for distributed computing) together with GPU
>> acceleration, where the GPU was used for each local computation. The
>> results imply that no improvement was gained, since GPU acceleration
>> only shows an obvious benefit when the input is large.
>>
>> On Fri, Feb 12, 2010 at 1:35 PM, Edward J. Yoon <ed...@apache.org>
>> wrote:
>>>
>>> I just found this project -- http://code.google.com/p/mrcl/
>>>
>>> CUBLAS is a BLAS library ported to CUDA, which enables fast computation
>>> on GPUs without operating the CUDA driver directly.
>>>
>>> --
>>> Best Regards, Edward J. Yoon @ NHN, corp.
>>> edwardyoon@apache.org
>>> http://blog.udanax.org
>>>
>>
>>
>>
>> --
>> Best Regards, Edward J. Yoon @ NHN, corp.
>> edwardyoon@apache.org
>> http://blog.udanax.org
>>
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org

Please, remove me from the mailing list!!!

Posted by Mari <ma...@hotmail.com>.

Sent.


On 22/02/2010, at 01:45, "Edward J. Yoon" <ed...@apache.org>
wrote:

> There is a report about the performance of GPU-accelerated matrix
> computation with M/R.
>
> http://mrcl.googlecode.com/svn/trunk/report/ (Korean)
>
> In a nutshell, they performed matrix multiplication using Map/Reduce
> (a block algorithm for distributed computing) together with GPU
> acceleration, where the GPU was used for each local computation. The
> results imply that no improvement was gained, since GPU acceleration
> only shows an obvious benefit when the input is large.
>
> On Fri, Feb 12, 2010 at 1:35 PM, Edward J. Yoon  
> <ed...@apache.org> wrote:
>> I just found this project -- http://code.google.com/p/mrcl/
>>
>> CUBLAS is a BLAS library ported to CUDA, which enables fast computation
>> on GPUs without operating the CUDA driver directly.
>>
>> --
>> Best Regards, Edward J. Yoon @ NHN, corp.
>> edwardyoon@apache.org
>> http://blog.udanax.org
>>
>
>
>
> -- 
> Best Regards, Edward J. Yoon @ NHN, corp.
> edwardyoon@apache.org
> http://blog.udanax.org
>

Re: Interesting project, hama-mrcl (CUBLAS)

Posted by "Edward J. Yoon" <ed...@apache.org>.
FYI, more detailed post:
http://blog.udanax.org/2010/02/interesting-project-hama-mrcl.html

On Mon, Feb 22, 2010 at 2:45 PM, Edward J. Yoon <ed...@apache.org> wrote:
> There is a report about the performance of GPU-accelerated matrix
> computation with M/R.
>
> http://mrcl.googlecode.com/svn/trunk/report/ (Korean)
>
> In a nutshell, they performed matrix multiplication using Map/Reduce
> (a block algorithm for distributed computing) together with GPU
> acceleration, where the GPU was used for each local computation. The
> results imply that no improvement was gained, since GPU acceleration
> only shows an obvious benefit when the input is large.
>
> On Fri, Feb 12, 2010 at 1:35 PM, Edward J. Yoon <ed...@apache.org> wrote:
>> I just found this project -- http://code.google.com/p/mrcl/
>>
>> CUBLAS is a BLAS library ported to CUDA, which enables fast computation
>> on GPUs without operating the CUDA driver directly.
>>
>> --
>> Best Regards, Edward J. Yoon @ NHN, corp.
>> edwardyoon@apache.org
>> http://blog.udanax.org
>>
>
>
>
> --
> Best Regards, Edward J. Yoon @ NHN, corp.
> edwardyoon@apache.org
> http://blog.udanax.org
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org

Re: Interesting project, hama-mrcl (CUBLAS)

Posted by "Edward J. Yoon" <ed...@apache.org>.
There is a report about the performance of GPU-accelerated matrix
computation with M/R.

http://mrcl.googlecode.com/svn/trunk/report/ (Korean)

In a nutshell, they performed matrix multiplication using Map/Reduce
(a block algorithm for distributed computing) together with GPU
acceleration, where the GPU was used for each local computation. The
results imply that no improvement was gained, since GPU acceleration
only shows an obvious benefit when the input is large.
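
To make the scheme concrete, here is a plain-C sketch of the block
(tiled) multiplication they describe (my own illustration, not the mrcl
code; N, BS, and block_multiply are made-up names). The two outer block
loops are what Map/Reduce distributes across nodes; each block_multiply
call is the per-node local computation that the report hands to the GPU
(e.g. one sgemm per block pair):

  #define N  8      /* matrix dimension (N x N, row-major)   */
  #define BS 4      /* block size; assumes BS divides N      */

  /* C block (bi,bj) += A block (bi,bk) * B block (bk,bj).
     In the GPU-accelerated version this is the piece that would be
     replaced by a CUBLAS sgemm on the two BS x BS blocks. */
  static void block_multiply(const double *A, const double *B, double *C,
                             int bi, int bj, int bk) {
      for (int i = 0; i < BS; ++i)
          for (int j = 0; j < BS; ++j)
              for (int k = 0; k < BS; ++k)
                  C[(bi*BS + i)*N + (bj*BS + j)] +=
                      A[(bi*BS + i)*N + (bk*BS + k)] *
                      B[(bk*BS + k)*N + (bj*BS + j)];
  }

  /* C must be zero-initialized by the caller. */
  void block_matmul(const double *A, const double *B, double *C) {
      for (int bi = 0; bi < N/BS; ++bi)          /* block row of C     */
          for (int bj = 0; bj < N/BS; ++bj)      /* block column of C  */
              for (int bk = 0; bk < N/BS; ++bk)  /* summed block index */
                  block_multiply(A, B, C, bi, bj, bk);
  }

With blocks this small, host/device transfer and launch overhead tends
to dominate the GPU's work, which is one common reason GPU acceleration
only pays off once the per-block input grows large, consistent with the
report's conclusion.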

On Fri, Feb 12, 2010 at 1:35 PM, Edward J. Yoon <ed...@apache.org> wrote:
> I just found this project -- http://code.google.com/p/mrcl/
>
> CUBLAS is a BLAS library ported to CUDA, which enables fast computation
> on GPUs without operating the CUDA driver directly.
>
> --
> Best Regards, Edward J. Yoon @ NHN, corp.
> edwardyoon@apache.org
> http://blog.udanax.org
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org