Posted to user@hbase.apache.org by anilkr <kr...@rediffmail.com> on 2010/01/10 08:49:56 UTC

Basic question about using C# with Hadoop filesystems

Currently my application uses C# with Mono on Linux to communicate with
local file systems (e.g. ext2, ext3). The basic operations are opening a
file, writing to/reading from the file, and closing/deleting the file. For
this I currently use the native C# APIs to operate on the file.
   My question is: if I install the Hadoop file system on my Linux box,
what changes do I need to make to my existing functions so that they
communicate with the Hadoop file system for basic file operations? Since
the Hadoop infrastructure is based on Java, how will a C# (Mono)
application do basic operations with Hadoop? Do the basic C# file APIs
(like File.Open or File.Copy) work with Hadoop filesystems too?

Also, if I want to open a file, do I need to mount the Hadoop filesystem
programmatically? If yes, how?

Thanks,
Anil
-- 
View this message in context: http://old.nabble.com/Basic-question-about-using-C--with-Hadoop-filesystems-tp27096203p27096203.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Basic question about using C# with Hadoop filesystems

Posted by Andrew Purtell <ap...@apache.org>.
> From: Ryan Rawson <ry...@gmail.com>

> Just to clarify things, the "C" HDFS API is actually reverse-JNI and
> wraps the Java classes (I didn't know this was possible...)

Yeah, it links against libjvm.so, so it embeds a JVM in the native
process, which then loads and runs the HDFS client bytecode. One must
still have a functional JRE on the target, plus all of the Hadoop jars,
libs, and conf.
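
Concretely, "embedding a JVM" here means the JNI invocation API. The
following is a minimal sketch of what libhdfs does internally (the class
path is illustrative; it needs jni.h, linking against -ljvm, and a JRE on
the target, so it is not runnable as-is):

```c
/* Sketch of the JNI invocation API: the native process creates a JVM in
 * its own address space, then calls into the Java HDFS client via JNI.
 * Build sketch: gcc jvm_embed.c -I$JAVA_HOME/include -ljvm
 */
#include <jni.h>
#include <stdio.h>

int main(void) {
    JavaVM *jvm;
    JNIEnv *env;
    JavaVMOption opts[1];
    /* The Hadoop jars and conf dir must be on this class path
     * (path is a placeholder). */
    opts[0].optionString = "-Djava.class.path=/path/to/hadoop/jars";

    JavaVMInitArgs args;
    args.version = JNI_VERSION_1_2;
    args.nOptions = 1;
    args.options = opts;
    args.ignoreUnrecognized = JNI_FALSE;

    /* Embed the JVM in this process. */
    if (JNI_CreateJavaVM(&jvm, (void **)&env, &args) != JNI_OK) {
        fprintf(stderr, "could not create JVM\n");
        return 1;
    }
    /* From here, libhdfs looks up the HDFS client classes, e.g.
     * (*env)->FindClass(env, "org/apache/hadoop/fs/FileSystem"),
     * and invokes their methods through JNI method IDs. */
    (*jvm)->DestroyJavaVM(jvm);
    return 0;
}
```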

   - Andy


Re: Basic question about using C# with Hadoop filesystems

Posted by Ryan Rawson <ry...@gmail.com>.
Just to clarify things: the "C" HDFS API is actually reverse-JNI and
wraps the Java classes (I didn't know this was possible...), so
wrapping the "C" API would be a double wrap, which is kind of weird.
I'm not sure how performant it would be.

-ryan


RE: Basic question about using C# with Hadoop filesystems

Posted by "Gibbon, Robert, VF-Group" <Ro...@vodafone.com>.
http://code.google.com/p/jar2ikvmc/

This tool, referenced from IKVM (itself referenced from the Mono project), claims to resolve dependencies and convert Java bytecode class files collected in JARs into Mono MSIL DLLs. You might be able to use it to make a native MSIL .NET port of the HBase client.

HTH
R


RE: Basic question about using C# with Hadoop filesystems

Posted by "Gibbon, Robert, VF-Group" <Ro...@vodafone.com>.
Options for you:

http://www.markhneedham.com/blog/2008/08/29/c-thrift-examples/

Thrift is one of the main ways into HBase and HDFS. The link above is a blog post with Thrift IDL examples for C#.

http://www.mono-project.com/using/relnotes/1.0-beta1.html

Mono has a bytecode-to-MSIL translator built in. It's not likely to give high performance, though, and I doubt it will work, to be honest.

http://caffeine.berlios.de

Provides Mono-Java interop via JNI: no dynamic bytecode-to-MSIL translation. I have never used it, and the project looks quite dead, but it might still do what you want.






Re: Basic question about using C# with Hadoop filesystems

Posted by Andrew Purtell <ap...@apache.org>.
Just to clarify:

> On Windows especially context switching during I/O like that has a 
> high penalty.

should read

> Context switching during I/O like that has a penalty.

I know we are talking about Mono on Linux here. After all the subject
is FUSE. I forgot to fix that statement before hitting 'send'. :-)





Re: Basic question about using C# with Hadoop filesystems

Posted by Andrew Purtell <ap...@apache.org>.
Bear in mind that hdfs-fuse has something like a 30% performance impact
when compared with direct access via the Java API. The data path is
something like:

    your app -> kernel -> libfuse -> JVM -> kernel -> HDFS

    HDFS -> kernel -> JVM -> libfuse -> kernel -> your app

On Windows especially context switching during I/O like that has a 
high penalty. Maybe it would be better to bind the C libhdfs API
directly via a C# wrapper (see http://wiki.apache.org/hadoop/LibHDFS).
But, at that point, you have pulled the Java Virtual Machine into the
address space of your process and are bridging between Java land and
C# land over the JNI and the C# equivalent. So, at this point, why not
just use Java instead of C#? Or, just use C and limit the damage to
only one native-to-managed interface instead of two?
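
For reference, the libhdfs C API mentioned above looks roughly like this. This is a minimal, untested sketch: it assumes the libhdfs headers and library from a Hadoop distribution, a JRE on the target (libhdfs embeds a JVM), and the Hadoop jars and conf on the CLASSPATH, so it won't build or run without a Hadoop installation; the build line and paths are illustrative:

```c
/* Minimal libhdfs sketch: connect, create a file, write, close.
 * Build sketch: gcc hdfs_demo.c -I$HADOOP_HOME/src/c++/libhdfs -lhdfs -ljvm
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include "hdfs.h"

int main(void) {
    /* "default" picks up the filesystem named in the Hadoop conf. */
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) { fprintf(stderr, "hdfsConnect failed\n"); return 1; }

    const char *path = "/tmp/testfile.txt";   /* illustrative path */
    /* 0s mean: default buffer size, replication, and block size. */
    hdfsFile out = hdfsOpenFile(fs, path, O_WRONLY | O_CREAT, 0, 0, 0);
    if (!out) { fprintf(stderr, "hdfsOpenFile failed\n"); return 1; }

    const char *msg = "hello hdfs";
    hdfsWrite(fs, out, (void *)msg, strlen(msg));
    hdfsFlush(fs, out);
    hdfsCloseFile(fs, out);
    hdfsDisconnect(fs);
    return 0;
}
```

Binding these calls from C# would mean marshalling the opaque hdfsFS/hdfsFile handles across P/Invoke, which is where the double native-to-managed cost discussed above comes in.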

The situation will change somewhat when/if all HDFS RPC is moved to
some RPC and serialization scheme which is truly language independent,
i.e. Avro. I have no idea when or if that will happen. Even if that
happens, as Ryan said before, the HDFS client is fat. Just talking
the RPC gets you maybe 25% of the way toward a functional HDFS
client. 

The bottom line is the Hadoop software ecosystem has a strong Java
affinity. 

   - Andy





Re: Basic question about using C# with Hadoop filesystems

Posted by Jean-Daniel Cryans <jd...@apache.org>.
http://code.google.com/p/hdfs-fuse/


Re: Basic question about using C# with Hadoop filesystems

Posted by Aram Mkhitaryan <ar...@googlemail.com>.
Ah, sorry, I forgot to mention: it's on the hdfs-user mailing list,
hdfs-user@hadoop.apache.org



Re: Basic question about using C# with Hadoop filesystems

Posted by anilkr <kr...@rediffmail.com>.
Aram, where is the discussion about fuse-dfs? I could not find the link in
your reply...

thanks





Re: Basic question about using C# with Hadoop filesystems

Posted by Aram Mkhitaryan <ar...@googlemail.com>.
Here is a discussion with the subject 'fuse-dfs',
where they discuss problems with mounting HDFS;
you can probably ask your question there.


Re: Basic question about using C# with Hadoop filesystems

Posted by Aram Mkhitaryan <ar...@googlemail.com>.
I'm not an expert here, but
I read somewhere that it's possible to install a module on Linux
that allows you to mount an HDFS folder as a standard Linux folder.
If I'm not mistaken, it was in one of Cloudera's distributions;
you can probably find something there.


On Sun, Jan 10, 2010 at 5:42 PM, anilkr <kr...@rediffmail.com> wrote:
>
> Thank you Ryan,
> My C# code is also on Linux (as it uses MONO framework on Linux).
> I understand that some bridging would be required. I am thinking that since
> Hadoop exposes some C APIs to operate with the Hadoop filesystem. If i write
> a wrapper in C and make a DLL out of it. Now my C# application will call
> this DLL for file operations and this DLL will call Hadoop APIs to operate
> with the files.
>
> Do you think this would be a proper way.
> thanks again
>
>
> Ryan Rawson wrote:
>>
>> Hadoop fs is not a typical filesystem, it is rpc oriented and uses a thick
>> client in Java. To get access to it from c# would involve bridging to Java
>> somehow. The c++ client does this.
>>
>> Most of hbase devs use Mac or Linux boxes. We aren't really experts in
>> windows tech. Maybe the main hadoop list could help you?
>>
>> On Jan 9, 2010 11:50 PM, "anilkr" <kr...@rediffmail.com> wrote:
>>
>>
>> Currently my application uses C# with MONO on Linux to communicate to
>> local
>> file systems (e.g. ext2, ext3). The basic operations are open a file,
>> write/read from file and close/delete the file. For this currently i use
>> C#
>> native APIs to operate on the file.
>>   My Question is: If i install Hadoop file system on my Linux box. Then
>> what change i need to do to my existing functions so that they communicate
>> to hadoop file system to do basic operations on the file. Since Hadoop
>> infrastructure is based on Java, How any C# (with MONO) application will
>> do
>> basic operations with Hadoop. Do the basic APIs in C# to operate on a file
>> (likr File.Open or File.Copy) work well with Hadoop filesystems too?
>>
>> Also, If i want to open a file then do i need to mount to Hadoop
>> filesystem
>> programmatically? If yes, how?
>>
>> Thanks,
>> Anil
>>
>>
>
>
>

Re: Basic question about using C# with Hadoop filesystems

Posted by anilkr <kr...@rediffmail.com>.
Thank you Ryan,
My C# code is also on Linux (it uses the MONO framework on Linux).
I understand that some bridging would be required. I am thinking that, since
Hadoop exposes some C APIs (libhdfs) to operate on the Hadoop filesystem, I
could write a wrapper in C and make a shared library (DLL) out of it. My C#
application would then call this library for file operations, and the library
would call the Hadoop C APIs to operate on the files.

Do you think this would be a proper way?
Thanks again
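
That approach can work, and with Mono you may not even need the C wrapper:
P/Invoke can bind straight to libhdfs. A rough, untested sketch -- the
function names come from Hadoop's hdfs.h, but the simplified signatures,
flag value, library name, and file path below are illustrative assumptions,
so check hdfs.h in your own distribution before relying on them:

```csharp
// Sketch only: calling libhdfs (Hadoop's C API) from Mono via P/Invoke.
// hdfsFS and hdfsFile are opaque handles in the C API, so IntPtr is used.
using System;
using System.Runtime.InteropServices;
using System.Text;

class HdfsSketch
{
    [DllImport("hdfs")] static extern IntPtr hdfsConnect(string host, ushort port);
    [DllImport("hdfs")] static extern IntPtr hdfsOpenFile(IntPtr fs, string path,
        int flags, int bufferSize, short replication, int blockSize);
    [DllImport("hdfs")] static extern int hdfsWrite(IntPtr fs, IntPtr file,
        byte[] buffer, int length);
    [DllImport("hdfs")] static extern int hdfsCloseFile(IntPtr fs, IntPtr file);
    [DllImport("hdfs")] static extern int hdfsDisconnect(IntPtr fs);

    const int O_WRONLY = 0x0001; // open-for-write flag, as in fcntl.h on Linux

    static void Main()
    {
        // "default" tells libhdfs to read the namenode from the Hadoop config.
        IntPtr fs = hdfsConnect("default", 0);
        IntPtr file = hdfsOpenFile(fs, "/tmp/hello.txt", O_WRONLY, 0, 0, 0);
        byte[] data = Encoding.UTF8.GetBytes("hello from mono\n");
        hdfsWrite(fs, file, data, data.Length);  // write the buffer to HDFS
        hdfsCloseFile(fs, file);
        hdfsDisconnect(fs);
    }
}
```

One caveat either way: because libhdfs embeds a JVM, the process still needs
a working JRE and the Hadoop jars on the CLASSPATH at runtime, whether the
calls go through a C wrapper DLL or directly through P/Invoke.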


Ryan Rawson wrote:
> 
> Hadoop fs is not a typical filesystem, it is rpc oriented and uses a thick
> client in Java. To get access to it from c# would involve bridging to Java
> somehow. The c++ client does this.
> 
> Most of hbase devs use Mac or Linux boxes. We aren't really experts in
> windows tech. Maybe the main hadoop list could help you?
> 
> On Jan 9, 2010 11:50 PM, "anilkr" <kr...@rediffmail.com> wrote:
> 
> 
> Currently my application uses C# with MONO on Linux to communicate to
> local
> file systems (e.g. ext2, ext3). The basic operations are open a file,
> write/read from file and close/delete the file. For this currently i use
> C#
> native APIs to operate on the file.
>   My Question is: If i install Hadoop file system on my Linux box. Then
> what change i need to do to my existing functions so that they communicate
> to hadoop file system to do basic operations on the file. Since Hadoop
> infrastructure is based on Java, How any C# (with MONO) application will
> do
> basic operations with Hadoop. Do the basic APIs in C# to operate on a file
> (likr File.Open or File.Copy) work well with Hadoop filesystems too?
> 
> Also, If i want to open a file then do i need to mount to Hadoop
> filesystem
> programmatically? If yes, how?
> 
> Thanks,
> Anil
> 
> 

-- 
View this message in context: http://old.nabble.com/Basic-question-about-using-C--with-Hadoop-filesystems-tp27096203p27098395.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Basic question about using C# with Hadoop filesystems

Posted by Ryan Rawson <ry...@gmail.com>.
HDFS is not a typical filesystem; it is RPC-oriented and uses a thick
client written in Java. Getting access to it from C# would involve bridging
to Java somehow. The C++ client does this.

Most HBase devs use Mac or Linux boxes, so we aren't really experts in
Windows tech. Maybe the main Hadoop list could help you?

On Jan 9, 2010 11:50 PM, "anilkr" <kr...@rediffmail.com> wrote:


Currently my application uses C# with MONO on Linux to communicate to local
file systems (e.g. ext2, ext3). The basic operations are open a file,
write/read from file and close/delete the file. For this currently i use C#
native APIs to operate on the file.
  My Question is: If i install Hadoop file system on my Linux box. Then
what change i need to do to my existing functions so that they communicate
to hadoop file system to do basic operations on the file. Since Hadoop
infrastructure is based on Java, How any C# (with MONO) application will do
basic operations with Hadoop. Do the basic APIs in C# to operate on a file
(likr File.Open or File.Copy) work well with Hadoop filesystems too?

Also, If i want to open a file then do i need to mount to Hadoop filesystem
programmatically? If yes, how?

Thanks,
Anil