Posted to user@hadoop.apache.org by Cheng Cheng <ch...@purdue.edu> on 2014/05/15 07:24:17 UTC

fuse-dfs on hadoop-2.2.0

Hi All,

With hadoop-2.2.0, I tried to mount HDFS using fuse-dfs. I successfully compiled hadoop-2.2.0.tar.gz and fuse-dfs with "mvn package -Pdist,native -DskipTests -Dtar -e -X" from hadoop-2.2.0-src.

After I deployed hadoop-2.2.0.tar.gz and started HDFS in single-node mode, I ran the following command to mount HDFS:

------------------------------------------------
LD_LIBRARY_PATH=/opt/hadoop-2.2.0/lib/native:/usr/java/jdk1.7.0_55/jre/lib/amd64/server CLASSPATH=/opt/hadoop-2.2.0/etc/hadoop:/opt/hadoop-2.2.0/share/hadoop/common/lib/*:/opt/hadoop-2.2.0/share/hadoop/common/*:/opt/hadoop-2.2.0/share/hadoop/hdfs:/opt/hadoop-2.2.0/share/hadoop/hdfs/lib/*:/opt/hadoop-2.2.0/share/hadoop/hdfs/*:/opt/hadoop-2.2.0/share/hadoop/yarn/lib/*:/opt/hadoop-2.2.0/share/hadoop/yarn/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/*:/opt/hadoop-2.2.0/contrib/capacity-scheduler/*.jar bin/fuse_dfs -d dfs://localhost:8020 /mnt/hdfs
------------------------------------------------

However, it failed with the following error message:

------------------------------------------------
INFO /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.c:115 Ignoring option -d
INFO /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.c:164 Adding FUSE arg /mnt/hdfs
FUSE library version: 2.8.3
nullpath_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.13
flags=0x0000b07b
max_readahead=0x00020000
INFO /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:98 Mounting with options: [ protected=(NULL), nn_uri=hdfs://nebula-vm14.cs.purdue.edu:8020, nn_port=0, debug=0, read_only=0, initchecks=0, no_permissions=0, usetrash=0, entry_timeout=60, attribute_timeout=60, rdbuffer_size=10485760, direct_io=0 ]
loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsConfGetInt(hadoop.fuse.timer.period): new Configuration error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Unable to determine the configured value for hadoop.fuse.timer.period.ERROR /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:134 FATAL: dfs_init: fuseConnectInit failed with error -22!
ERROR /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:34 LD_LIBRARY_PATH=/opt/hadoop-2.2.0/lib/native:/usr/java/jdk1.7.0_55/jre/lib/amd64/server
ERROR /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:35 CLASSPATH=/opt/hadoop-2.2.0/etc/hadoop:/opt/hadoop-2.2.0/share/hadoop/common/lib/*:/opt/hadoop-2.2.0/share/hadoop/common/*:/opt/hadoop-2.2.0/share/hadoop/hdfs:/opt/hadoop-2.2.0/share/hadoop/hdfs/lib/*:/opt/hadoop-2.2.0/share/hadoop/hdfs/*:/opt/hadoop-2.2.0/share/hadoop/yarn/lib/*:/opt/hadoop-2.2.0/share/hadoop/yarn/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/*:/opt/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
------------------------------------------------

It looks like the configuration is not loaded correctly, even though I have already configured "hadoop.fuse.timer.period" and "hadoop.fuse.connection.timeout" in $HADOOP_HOME/etc/hadoop/hdfs-site.xml.
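
As a sanity check (a sketch only, assuming the same /opt/hadoop-2.2.0 paths as in the command above), the values that the Hadoop configuration actually resolves can be printed with the getconf tool:

------------------------------------------------
# Sketch: print the resolved values outside of fuse_dfs.
# Assumes the /opt/hadoop-2.2.0 layout used above and that this
# build's "hdfs getconf" supports the -confKey option.
export HADOOP_CONF_DIR=/opt/hadoop-2.2.0/etc/hadoop
/opt/hadoop-2.2.0/bin/hdfs getconf -confKey hadoop.fuse.timer.period
/opt/hadoop-2.2.0/bin/hdfs getconf -confKey hadoop.fuse.connection.timeout
------------------------------------------------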

Can anyone share some hints on how to fix this? How can I get fuse-dfs to load the configuration correctly?

Thanks in advance!
Cheng

Re: fuse-dfs on hadoop-2.2.0

Posted by Harsh J <ha...@cloudera.com>.
I forgot to send this earlier, but here's an answer with added links
that may help: http://stackoverflow.com/a/21655102/1660002

On Sat, May 17, 2014 at 9:54 AM, Harsh J <ha...@cloudera.com> wrote:
> The issue here is that JNI doesn't like wildcards in the classpath
> string - it does not evaluate them the same way the regular runtime
> does. Try placing a full list of explicit jars on the classpath and it
> will not throw that Class-Not-Found error anymore.
>
> On Thu, May 15, 2014 at 10:54 AM, Cheng Cheng <ch...@purdue.edu> wrote:
>> Hi All,
>>
>> With hadoop-2.2.0, I tried to mount HDFS using fuse-dfs. I successfully compiled hadoop-2.2.0.tar.gz and fuse-dfs with "mvn package -Pdist,native -DskipTests -Dtar -e -X" from hadoop-2.2.0-src.
>>
>> After I deployed hadoop-2.2.0.tar.gz and started HDFS in single-node mode, I ran the following command to mount HDFS:
>>
>> ------------------------------------------------
>> LD_LIBRARY_PATH=/opt/hadoop-2.2.0/lib/native:/usr/java/jdk1.7.0_55/jre/lib/amd64/server CLASSPATH=/opt/hadoop-2.2.0/etc/hadoop:/opt/hadoop-2.2.0/share/hadoop/common/lib/*:/opt/hadoop-2.2.0/share/hadoop/common/*:/opt/hadoop-2.2.0/share/hadoop/hdfs:/opt/hadoop-2.2.0/share/hadoop/hdfs/lib/*:/opt/hadoop-2.2.0/share/hadoop/hdfs/*:/opt/hadoop-2.2.0/share/hadoop/yarn/lib/*:/opt/hadoop-2.2.0/share/hadoop/yarn/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/*:/opt/hadoop-2.2.0/contrib/capacity-scheduler/*.jar bin/fuse_dfs -d dfs://localhost:8020 /mnt/hdfs
>> ------------------------------------------------
>>
>> However, it failed with the following error message:
>>
>> ------------------------------------------------
>> INFO /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.c:115 Ignoring option -d
>> INFO /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.c:164 Adding FUSE arg /mnt/hdfs
>> FUSE library version: 2.8.3
>> nullpath_ok: 0
>> unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
>> INIT: 7.13
>> flags=0x0000b07b
>> max_readahead=0x00020000
>> INFO /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:98 Mounting with options: [ protected=(NULL), nn_uri=hdfs://nebula-vm14.cs.purdue.edu:8020, nn_port=0, debug=0, read_only=0, initchecks=0, no_permissions=0, usetrash=0, entry_timeout=60, attribute_timeout=60, rdbuffer_size=10485760, direct_io=0 ]
>> loadFileSystems error:
>> (unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
>> hdfsConfGetInt(hadoop.fuse.timer.period): new Configuration error:
>> (unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
>> Unable to determine the configured value for hadoop.fuse.timer.period.ERROR /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:134 FATAL: dfs_init: fuseConnectInit failed with error -22!
>> ERROR /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:34 LD_LIBRARY_PATH=/opt/hadoop-2.2.0/lib/native:/usr/java/jdk1.7.0_55/jre/lib/amd64/server
>> ERROR /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:35 CLASSPATH=/opt/hadoop-2.2.0/etc/hadoop:/opt/hadoop-2.2.0/share/hadoop/common/lib/*:/opt/hadoop-2.2.0/share/hadoop/common/*:/opt/hadoop-2.2.0/share/hadoop/hdfs:/opt/hadoop-2.2.0/share/hadoop/hdfs/lib/*:/opt/hadoop-2.2.0/share/hadoop/hdfs/*:/opt/hadoop-2.2.0/share/hadoop/yarn/lib/*:/opt/hadoop-2.2.0/share/hadoop/yarn/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/*:/opt/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
>> ------------------------------------------------
>>
>> It looks like the configuration is not loaded correctly, even though I have already configured "hadoop.fuse.timer.period" and "hadoop.fuse.connection.timeout" in $HADOOP_HOME/etc/hadoop/hdfs-site.xml.
>>
>> Can anyone share some hints on how to fix this? How can I get fuse-dfs to load the configuration correctly?
>>
>> Thanks in advance!
>> Cheng
>
>
>
> --
> Harsh J



-- 
Harsh J

Re: fuse-dfs on hadoop-2.2.0

Posted by Harsh J <ha...@cloudera.com>.
The issue here is that JNI doesn't like wildcards in the classpath
string - it does not evaluate them the same way the regular runtime
does. Try placing a full list of explicit jars on the classpath and it
will not throw that Class-Not-Found error anymore.
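
As an illustration only (assuming the /opt/hadoop-2.2.0 layout and JDK path from the
original command; adjust to your install), the wildcard classpath could be expanded
into explicit jar entries along these lines:

------------------------------------------------
# Sketch: build an explicit CLASSPATH, since a JVM created through the JNI
# invocation API does not expand "*" classpath wildcards the way the regular
# java launcher does. Paths below are taken from the original command.
HADOOP_HOME=/opt/hadoop-2.2.0
FUSE_CLASSPATH="$HADOOP_HOME/etc/hadoop"
for jar in $(find "$HADOOP_HOME/share/hadoop" -name '*.jar'); do
  FUSE_CLASSPATH="$FUSE_CLASSPATH:$jar"
done
LD_LIBRARY_PATH="$HADOOP_HOME/lib/native:/usr/java/jdk1.7.0_55/jre/lib/amd64/server" \
CLASSPATH="$FUSE_CLASSPATH" \
bin/fuse_dfs -d dfs://localhost:8020 /mnt/hdfs
------------------------------------------------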

On Thu, May 15, 2014 at 10:54 AM, Cheng Cheng <ch...@purdue.edu> wrote:
> Hi All,
>
> With hadoop-2.2.0, I tried to mount HDFS using fuse-dfs. I successfully compiled hadoop-2.2.0.tar.gz and fuse-dfs with "mvn package -Pdist,native -DskipTests -Dtar -e -X" from hadoop-2.2.0-src.
>
> After I deployed hadoop-2.2.0.tar.gz and started HDFS in single-node mode, I ran the following command to mount HDFS:
>
> ------------------------------------------------
> LD_LIBRARY_PATH=/opt/hadoop-2.2.0/lib/native:/usr/java/jdk1.7.0_55/jre/lib/amd64/server CLASSPATH=/opt/hadoop-2.2.0/etc/hadoop:/opt/hadoop-2.2.0/share/hadoop/common/lib/*:/opt/hadoop-2.2.0/share/hadoop/common/*:/opt/hadoop-2.2.0/share/hadoop/hdfs:/opt/hadoop-2.2.0/share/hadoop/hdfs/lib/*:/opt/hadoop-2.2.0/share/hadoop/hdfs/*:/opt/hadoop-2.2.0/share/hadoop/yarn/lib/*:/opt/hadoop-2.2.0/share/hadoop/yarn/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/*:/opt/hadoop-2.2.0/contrib/capacity-scheduler/*.jar bin/fuse_dfs -d dfs://localhost:8020 /mnt/hdfs
> ------------------------------------------------
>
> However, it failed with the following error message:
>
> ------------------------------------------------
> INFO /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.c:115 Ignoring option -d
> INFO /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.c:164 Adding FUSE arg /mnt/hdfs
> FUSE library version: 2.8.3
> nullpath_ok: 0
> unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
> INIT: 7.13
> flags=0x0000b07b
> max_readahead=0x00020000
> INFO /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:98 Mounting with options: [ protected=(NULL), nn_uri=hdfs://nebula-vm14.cs.purdue.edu:8020, nn_port=0, debug=0, read_only=0, initchecks=0, no_permissions=0, usetrash=0, entry_timeout=60, attribute_timeout=60, rdbuffer_size=10485760, direct_io=0 ]
> loadFileSystems error:
> (unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
> hdfsConfGetInt(hadoop.fuse.timer.period): new Configuration error:
> (unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
> Unable to determine the configured value for hadoop.fuse.timer.period.ERROR /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:134 FATAL: dfs_init: fuseConnectInit failed with error -22!
> ERROR /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:34 LD_LIBRARY_PATH=/opt/hadoop-2.2.0/lib/native:/usr/java/jdk1.7.0_55/jre/lib/amd64/server
> ERROR /root/hadoop-2.2.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_init.c:35 CLASSPATH=/opt/hadoop-2.2.0/etc/hadoop:/opt/hadoop-2.2.0/share/hadoop/common/lib/*:/opt/hadoop-2.2.0/share/hadoop/common/*:/opt/hadoop-2.2.0/share/hadoop/hdfs:/opt/hadoop-2.2.0/share/hadoop/hdfs/lib/*:/opt/hadoop-2.2.0/share/hadoop/hdfs/*:/opt/hadoop-2.2.0/share/hadoop/yarn/lib/*:/opt/hadoop-2.2.0/share/hadoop/yarn/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/*:/opt/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
> ------------------------------------------------
>
> It looks like the configuration is not loaded correctly, even though I have already configured "hadoop.fuse.timer.period" and "hadoop.fuse.connection.timeout" in $HADOOP_HOME/etc/hadoop/hdfs-site.xml.
>
> Can anyone share some hints on how to fix this? How can I get fuse-dfs to load the configuration correctly?
>
> Thanks in advance!
> Cheng



-- 
Harsh J
