Posted to user@hadoop.apache.org by Marília Melo <ma...@gmail.com> on 2013/06/26 21:36:18 UTC

java.lang.UnsatisfiedLinkError - Unable to load libGfarmFSNative library

Hi all,

I'm trying to install a plugin called gfarm_hadoop that allows me to use a
filesystem called gfarm instead of HDFS (
https://sourceforge.net/projects/gfarm/files/gfarm_hadoop/).

I have used it before, but now I'm trying to install it in a new cluster
and for some reason it isn't working...

After installing gfarm 2.5.8 at /data/local3/marilia/gfarm, hadoop 1.1.2 at
/data/local3/marilia/hadoop-1.1.2, and the plugin, listing the new
filesystem works fine:

$ bin/hadoop fs -ls gfarm:///
Found 26 items
-rwxrwxrwx   1        101 2013-06-26 02:36 /foo
drwxrwxrwx   -          0 2013-06-26 02:43 /home

But then when I run an example, the task eventually completes, but I get
"Unable to load libGfarmFSNative library" errors. Looking at the log
messages it seems to be a path problem, but I have tried almost everything
and it doesn't work.

The way I'm setting the path now is by adding the following line to
conf/hadoop-env.sh:

export LD_LIBRARY_PATH=/data/local3/marilia/gfarm/lib

I have even moved all the .so files to the hadoop directory, but I still
get the same message...
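
One way to check whether the native library can resolve its dependency is
ldd (paths below are from my install; adjust as needed):

$ ldd /data/local3/marilia/hadoop-1.1.2/lib/native/Linux-amd64-64/libGfarmFSNative.so | grep libgfarm

If that prints "libgfarm.so.1 => not found", the dynamic loader cannot see
the gfarm libraries in that environment.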


Any ideas?

Thanks in advance.


Log:

$ bin/hadoop jar hadoop-examples-*.jar teragen 1000 gfarm:///inoa11
Generating 1000 using 2 maps with step of 500
13/06/27 03:57:32 INFO mapred.JobClient: Running job: job_201306270356_0001
13/06/27 03:57:33 INFO mapred.JobClient:  map 0% reduce 0%
13/06/27 03:57:38 INFO mapred.JobClient:  map 50% reduce 0%
13/06/27 03:57:43 INFO mapred.JobClient: Task Id :
attempt_201306270356_0001_m_000001_0, Status : FAILED
java.lang.Throwable: Child Error
       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

java.lang.Throwable: Child Error
       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
       at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

attempt_201306270356_0001_m_000001_0: java.lang.UnsatisfiedLinkError:
/data/local3/marilia/hadoop-1.1.2/lib/native/Linux-amd64-64/libGfarmFSNative.so:
libgfarm.so.1: cannot open shared object file: No such file or directory
attempt_201306270356_0001_m_000001_0:   at
java.lang.ClassLoader$NativeLibrary.load(Native Method)
attempt_201306270356_0001_m_000001_0:   at
java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
attempt_201306270356_0001_m_000001_0:   at
java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
attempt_201306270356_0001_m_000001_0:   at
java.lang.Runtime.loadLibrary0(Runtime.java:823)
attempt_201306270356_0001_m_000001_0:   at
java.lang.System.loadLibrary(System.java:1028)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.fs.gfarmfs.GfarmFSNative.<clinit>(GfarmFSNative.java:9)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.fs.gfarmfs.GfarmFileSystem.initialize(GfarmFileSystem.java:34)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1411)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1429)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.mapred.FileOutputCommitter.getTempTaskOutputPath(FileOutputCommitter.java:234)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.mapred.Task.initialize(Task.java:522)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.mapred.MapTask.run(MapTask.java:353)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.mapred.Child$4.run(Child.java:255)
attempt_201306270356_0001_m_000001_0:   at
java.security.AccessController.doPrivileged(Native Method)
attempt_201306270356_0001_m_000001_0:   at
javax.security.auth.Subject.doAs(Subject.java:396)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
attempt_201306270356_0001_m_000001_0:   at
org.apache.hadoop.mapred.Child.main(Child.java:249)
attempt_201306270356_0001_m_000001_0: Unable to load libGfarmFSNative
library
13/06/27 03:57:46 INFO mapred.JobClient:  map 100% reduce 0%
13/06/27 03:57:47 INFO mapred.JobClient: Job complete: job_201306270356_0001
13/06/27 03:57:47 INFO mapred.JobClient: Counters: 18
13/06/27 03:57:47 INFO mapred.JobClient:   Job Counters
13/06/27 03:57:47 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=10846
13/06/27 03:57:47 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0
13/06/27 03:57:47 INFO mapred.JobClient:     Total time spent by all maps
waiting after reserving slots (ms)=0
13/06/27 03:57:47 INFO mapred.JobClient:     Launched map tasks=3
13/06/27 03:57:47 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/06/27 03:57:47 INFO mapred.JobClient:   File Input Format Counters
13/06/27 03:57:47 INFO mapred.JobClient:     Bytes Read=0
13/06/27 03:57:47 INFO mapred.JobClient:   File Output Format Counters
13/06/27 03:57:47 INFO mapred.JobClient:     Bytes Written=0
13/06/27 03:57:47 INFO mapred.JobClient:   FileSystemCounters
13/06/27 03:57:47 INFO mapred.JobClient:     HDFS_BYTES_READ=164
13/06/27 03:57:47 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=104248
13/06/27 03:57:47 INFO mapred.JobClient:   Map-Reduce Framework
13/06/27 03:57:47 INFO mapred.JobClient:     Map input records=1000
13/06/27 03:57:47 INFO mapred.JobClient:     Physical memory (bytes)
snapshot=207736832
13/06/27 03:57:47 INFO mapred.JobClient:     Spilled Records=0
13/06/27 03:57:47 INFO mapred.JobClient:     CPU time spent (ms)=190
13/06/27 03:57:47 INFO mapred.JobClient:     Total committed heap usage
(bytes)=401997824
13/06/27 03:57:47 INFO mapred.JobClient:     Virtual memory (bytes)
snapshot=1104424960
13/06/27 03:57:47 INFO mapred.JobClient:     Map input bytes=1000
13/06/27 03:57:47 INFO mapred.JobClient:     Map output records=1000
13/06/27 03:57:47 INFO mapred.JobClient:     SPLIT_RAW_BYTES=164

--
Marilia Melo

Re: java.lang.UnsatisfiedLinkError - Unable to load libGfarmFSNative library

Posted by Marília Melo <ma...@gmail.com>.
Thanks all for the comments.

You were right: I was testing on only one node, but there was another node
running Hadoop that didn't have libgfarm.* on its library path. Even though
my conf/slaves did not include that node originally, it was still trying to
execute tasks.
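
For anyone hitting the same thing, a quick sanity check to find hosts
missing the library (a sketch: it assumes passwordless ssh and the same
install path everywhere; extra-hosts.txt is a hypothetical list of any
machines outside conf/slaves, which is what bit me):

$ for h in $(cat conf/slaves extra-hosts.txt); do ssh "$h" test -e /data/local3/marilia/gfarm/lib/libgfarm.so.1 || echo "$h: missing libgfarm.so.1"; done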

Takeaway: always read the logs!

Thanks a lot!

--
Marilia Melo


On Thu, Jun 27, 2013 at 11:55 AM, Azuryy Yu <az...@gmail.com> wrote:

> From the log:  libGfarmFSNative.so: libgfarm.so.1: cannot open shared
> object file: No such file or directory
>
> I don't think you put libgfarm.* under
> $HADOOP_HOME/lib/native/Linux-amd64-64 (Linux-i386-32 if running on a
> 32-bit OS) on all nodes.
>
>
>
> On Thu, Jun 27, 2013 at 10:44 AM, Harsh J <ha...@cloudera.com> wrote:
>
>> Is "libgfarm.so.1" installed and available on all systems? You're facing
>> a link error even though Hadoop did try to load the library it had
>> (libGfarmFSNative.so). If the "gfarm" guys have a mailing list, that's
>> probably the best place to ask.
>>
>>
>> On Thu, Jun 27, 2013 at 1:06 AM, Marília Melo <ma...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I'm trying to install a plugin called gfarm_hadoop that allows me to use
>>> a filesystem called gfarm instead of HDFS (
>>> https://sourceforge.net/projects/gfarm/files/gfarm_hadoop/).
>>>
>>> I have used it before, but now I'm trying to install it in a new cluster
>>> and for some reason it isn't working...
>>>
>>> After installing gfarm 2.5.8 at /data/local3/marilia/gfarm, hadoop 1.1.2
>>> at /data/local3/marilia/hadoop-1.1.2, and the plugin, listing the new
>>> filesystem works fine:
>>>
>>> $ bin/hadoop fs -ls gfarm:///
>>> Found 26 items
>>> -rwxrwxrwx   1        101 2013-06-26 02:36 /foo
>>> drwxrwxrwx   -          0 2013-06-26 02:43 /home
>>>
>>> But then when I run an example, the task eventually completes, but I
>>> get "Unable to load libGfarmFSNative library" errors. Looking at the log
>>> messages it seems to be a path problem, but I have tried almost
>>> everything and it doesn't work.
>>>
>>> The way I'm setting the path now is by adding the following line to
>>> conf/hadoop-env.sh:
>>>
>>> export LD_LIBRARY_PATH=/data/local3/marilia/gfarm/lib
>>>
>>> I have even moved all the .so files to the hadoop directory, but I still
>>> get the same message...
>>>
>>>
>>> Any ideas?
>>>
>>> Thanks in advance.
>>>
>>>
>>> Log:
>>>
>>> $ bin/hadoop jar hadoop-examples-*.jar teragen 1000 gfarm:///inoa11
>>> Generating 1000 using 2 maps with step of 500
>>> 13/06/27 03:57:32 INFO mapred.JobClient: Running job:
>>> job_201306270356_0001
>>> 13/06/27 03:57:33 INFO mapred.JobClient:  map 0% reduce 0%
>>> 13/06/27 03:57:38 INFO mapred.JobClient:  map 50% reduce 0%
>>> 13/06/27 03:57:43 INFO mapred.JobClient: Task Id :
>>> attempt_201306270356_0001_m_000001_0, Status : FAILED
>>> java.lang.Throwable: Child Error
>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>> Caused by: java.io.IOException: Task process exit with nonzero status of
>>> 1.
>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>>
>>> java.lang.Throwable: Child Error
>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>> Caused by: java.io.IOException: Task process exit with nonzero status of
>>> 1.
>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>>
>>> attempt_201306270356_0001_m_000001_0: java.lang.UnsatisfiedLinkError:
>>> /data/local3/marilia/hadoop-1.1.2/lib/native/Linux-amd64-64/libGfarmFSNative.so:
>>> libgfarm.so.1: cannot open shared object file: No such file or directory
>>> attempt_201306270356_0001_m_000001_0:   at
>>> java.lang.ClassLoader$NativeLibrary.load(Native Method)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
>>>  attempt_201306270356_0001_m_000001_0:   at
>>> java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> java.lang.Runtime.loadLibrary0(Runtime.java:823)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> java.lang.System.loadLibrary(System.java:1028)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.fs.gfarmfs.GfarmFSNative.<clinit>(GfarmFSNative.java:9)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.fs.gfarmfs.GfarmFileSystem.initialize(GfarmFileSystem.java:34)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1411)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1429)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.mapred.FileOutputCommitter.getTempTaskOutputPath(FileOutputCommitter.java:234)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.mapred.Task.initialize(Task.java:522)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.mapred.MapTask.run(MapTask.java:353)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> java.security.AccessController.doPrivileged(Native Method)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> javax.security.auth.Subject.doAs(Subject.java:396)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>>> attempt_201306270356_0001_m_000001_0:   at
>>> org.apache.hadoop.mapred.Child.main(Child.java:249)
>>> attempt_201306270356_0001_m_000001_0: Unable to load libGfarmFSNative
>>> library
>>> 13/06/27 03:57:46 INFO mapred.JobClient:  map 100% reduce 0%
>>> 13/06/27 03:57:47 INFO mapred.JobClient: Job complete:
>>> job_201306270356_0001
>>> 13/06/27 03:57:47 INFO mapred.JobClient: Counters: 18
>>> 13/06/27 03:57:47 INFO mapred.JobClient:   Job Counters
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=10846
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Total time spent by all
>>> reduces waiting after reserving slots (ms)=0
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Total time spent by all
>>> maps waiting after reserving slots (ms)=0
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Launched map tasks=3
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
>>> 13/06/27 03:57:47 INFO mapred.JobClient:   File Input Format Counters
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Bytes Read=0
>>> 13/06/27 03:57:47 INFO mapred.JobClient:   File Output Format Counters
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Bytes Written=0
>>> 13/06/27 03:57:47 INFO mapred.JobClient:   FileSystemCounters
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     HDFS_BYTES_READ=164
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=104248
>>> 13/06/27 03:57:47 INFO mapred.JobClient:   Map-Reduce Framework
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Map input records=1000
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Physical memory (bytes)
>>> snapshot=207736832
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Spilled Records=0
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     CPU time spent (ms)=190
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Total committed heap usage
>>> (bytes)=401997824
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Virtual memory (bytes)
>>> snapshot=1104424960
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Map input bytes=1000
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     Map output records=1000
>>> 13/06/27 03:57:47 INFO mapred.JobClient:     SPLIT_RAW_BYTES=164
>>>
>>> --
>>> Marilia Melo
>>>
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re: java.lang.UnsatisfiedLinkError - Unable to load libGfarmFSNative library

Posted by Azuryy Yu <az...@gmail.com>.
From the log: libGfarmFSNative.so: libgfarm.so.1: cannot open shared
object file: No such file or directory

I don't think you put libgfarm.* under
$HADOOP_HOME/lib/native/Linux-amd64-64 (Linux-i386-32 if running on a
32-bit OS) on all nodes.
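
For example, a rough way to push them out (a sketch, assuming passwordless
ssh, the same $HADOOP_HOME on every node, and that your slaves file really
lists every TaskTracker host; gfarm lib path taken from your mail):

$ for h in $(cat $HADOOP_HOME/conf/slaves); do scp /data/local3/marilia/gfarm/lib/libgfarm.so* $h:$HADOOP_HOME/lib/native/Linux-amd64-64/; done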



On Thu, Jun 27, 2013 at 10:44 AM, Harsh J <ha...@cloudera.com> wrote:

> Is "libgfarm.so.1" installed and available on all systems? You're facing
> a link error even though Hadoop did try to load the library it had
> (libGfarmFSNative.so). If the "gfarm" guys have a mailing list, that's
> probably the best place to ask.
>
>
> On Thu, Jun 27, 2013 at 1:06 AM, Marília Melo <ma...@gmail.com> wrote:
>
>> Hi all,
>>
>> I'm trying to install a plugin called gfarm_hadoop that allows me to use
>> a filesystem called gfarm instead of HDFS (
>> https://sourceforge.net/projects/gfarm/files/gfarm_hadoop/).
>>
>> I have used it before, but now I'm trying to install it in a new cluster
>> and for some reason it isn't working...
>>
>> After installing gfarm 2.5.8 at /data/local3/marilia/gfarm, hadoop 1.1.2
>> at /data/local3/marilia/hadoop-1.1.2, and the plugin, listing the new
>> filesystem works fine:
>>
>> $ bin/hadoop fs -ls gfarm:///
>> Found 26 items
>> -rwxrwxrwx   1        101 2013-06-26 02:36 /foo
>> drwxrwxrwx   -          0 2013-06-26 02:43 /home
>>
>> But then when I run an example, the task eventually completes, but I
>> get "Unable to load libGfarmFSNative library" errors. Looking at the log
>> messages it seems to be a path problem, but I have tried almost
>> everything and it doesn't work.
>>
>> The way I'm setting the path now is by adding the following line to
>> conf/hadoop-env.sh:
>>
>> export LD_LIBRARY_PATH=/data/local3/marilia/gfarm/lib
>>
>> I have even moved all the .so files to the hadoop directory, but I still
>> get the same message...
>>
>>
>> Any ideas?
>>
>> Thanks in advance.
>>
>>
>> Log:
>>
>> $ bin/hadoop jar hadoop-examples-*.jar teragen 1000 gfarm:///inoa11
>> Generating 1000 using 2 maps with step of 500
>> 13/06/27 03:57:32 INFO mapred.JobClient: Running job:
>> job_201306270356_0001
>> 13/06/27 03:57:33 INFO mapred.JobClient:  map 0% reduce 0%
>> 13/06/27 03:57:38 INFO mapred.JobClient:  map 50% reduce 0%
>> 13/06/27 03:57:43 INFO mapred.JobClient: Task Id :
>> attempt_201306270356_0001_m_000001_0, Status : FAILED
>> java.lang.Throwable: Child Error
>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>> Caused by: java.io.IOException: Task process exit with nonzero status of
>> 1.
>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>
>> java.lang.Throwable: Child Error
>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>> Caused by: java.io.IOException: Task process exit with nonzero status of
>> 1.
>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>
>> attempt_201306270356_0001_m_000001_0: java.lang.UnsatisfiedLinkError:
>> /data/local3/marilia/hadoop-1.1.2/lib/native/Linux-amd64-64/libGfarmFSNative.so:
>> libgfarm.so.1: cannot open shared object file: No such file or directory
>> attempt_201306270356_0001_m_000001_0:   at
>> java.lang.ClassLoader$NativeLibrary.load(Native Method)
>> attempt_201306270356_0001_m_000001_0:   at
>> java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
>>  attempt_201306270356_0001_m_000001_0:   at
>> java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
>> attempt_201306270356_0001_m_000001_0:   at
>> java.lang.Runtime.loadLibrary0(Runtime.java:823)
>> attempt_201306270356_0001_m_000001_0:   at
>> java.lang.System.loadLibrary(System.java:1028)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.fs.gfarmfs.GfarmFSNative.<clinit>(GfarmFSNative.java:9)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.fs.gfarmfs.GfarmFileSystem.initialize(GfarmFileSystem.java:34)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1411)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1429)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.mapred.FileOutputCommitter.getTempTaskOutputPath(FileOutputCommitter.java:234)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.mapred.Task.initialize(Task.java:522)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.mapred.MapTask.run(MapTask.java:353)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>> attempt_201306270356_0001_m_000001_0:   at
>> java.security.AccessController.doPrivileged(Native Method)
>> attempt_201306270356_0001_m_000001_0:   at
>> javax.security.auth.Subject.doAs(Subject.java:396)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>> attempt_201306270356_0001_m_000001_0:   at
>> org.apache.hadoop.mapred.Child.main(Child.java:249)
>> attempt_201306270356_0001_m_000001_0: Unable to load libGfarmFSNative
>> library
>> 13/06/27 03:57:46 INFO mapred.JobClient:  map 100% reduce 0%
>> 13/06/27 03:57:47 INFO mapred.JobClient: Job complete:
>> job_201306270356_0001
>> 13/06/27 03:57:47 INFO mapred.JobClient: Counters: 18
>> 13/06/27 03:57:47 INFO mapred.JobClient:   Job Counters
>> 13/06/27 03:57:47 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=10846
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Total time spent by all
>> reduces waiting after reserving slots (ms)=0
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Total time spent by all maps
>> waiting after reserving slots (ms)=0
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Launched map tasks=3
>> 13/06/27 03:57:47 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
>> 13/06/27 03:57:47 INFO mapred.JobClient:   File Input Format Counters
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Bytes Read=0
>> 13/06/27 03:57:47 INFO mapred.JobClient:   File Output Format Counters
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Bytes Written=0
>> 13/06/27 03:57:47 INFO mapred.JobClient:   FileSystemCounters
>> 13/06/27 03:57:47 INFO mapred.JobClient:     HDFS_BYTES_READ=164
>> 13/06/27 03:57:47 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=104248
>> 13/06/27 03:57:47 INFO mapred.JobClient:   Map-Reduce Framework
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Map input records=1000
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Physical memory (bytes)
>> snapshot=207736832
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Spilled Records=0
>> 13/06/27 03:57:47 INFO mapred.JobClient:     CPU time spent (ms)=190
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Total committed heap usage
>> (bytes)=401997824
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Virtual memory (bytes)
>> snapshot=1104424960
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Map input bytes=1000
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Map output records=1000
>> 13/06/27 03:57:47 INFO mapred.JobClient:     SPLIT_RAW_BYTES=164
>>
>> --
>> Marilia Melo
>>
>
>
>
> --
> Harsh J
>

>> 13/06/27 03:57:47 INFO mapred.JobClient:     Bytes Read=0
>> 13/06/27 03:57:47 INFO mapred.JobClient:   File Output Format Counters
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Bytes Written=0
>> 13/06/27 03:57:47 INFO mapred.JobClient:   FileSystemCounters
>> 13/06/27 03:57:47 INFO mapred.JobClient:     HDFS_BYTES_READ=164
>> 13/06/27 03:57:47 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=104248
>> 13/06/27 03:57:47 INFO mapred.JobClient:   Map-Reduce Framework
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Map input records=1000
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Physical memory (bytes)
>> snapshot=207736832
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Spilled Records=0
>> 13/06/27 03:57:47 INFO mapred.JobClient:     CPU time spent (ms)=190
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Total committed heap usage
>> (bytes)=401997824
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Virtual memory (bytes)
>> snapshot=1104424960
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Map input bytes=1000
>> 13/06/27 03:57:47 INFO mapred.JobClient:     Map output records=1000
>> 13/06/27 03:57:47 INFO mapred.JobClient:     SPLIT_RAW_BYTES=164
>>
>> --
>> Marilia Melo
>>
>
>
>
> --
> Harsh J
>

Re: java.lang.UnsatisfiedLinkError - Unable to load libGfarmFSNative library

Posted by Harsh J <ha...@cloudera.com>.
Is "libgfarm.so.1" installed and available on all systems? You're facing a
link error though hadoop did try to load the library it had (
libGfarmFSNative.so). If the "gfarm" guys have a mailing list, thats
probably the best place to ask.
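
A quick check to run on each node (a sketch; ldconfig -p only reflects the
system linker cache, and the directory is the one from the log above):

$ cd /data/local3/marilia/hadoop-1.1.2/lib/native/Linux-amd64-64
$ ldd libGfarmFSNative.so | grep "not found"   # any unresolved dependencies?
$ ldconfig -p | grep libgfarm                  # is libgfarm.so.1 known to the linker?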


On Thu, Jun 27, 2013 at 1:06 AM, Marília Melo <ma...@gmail.com> wrote:

> Hi all,
>
> I'm trying to install a plugin called gfarm_hadoop that allows me to use a
> filesystem called gfarm instead of HDFS (
> https://sourceforge.net/projects/gfarm/files/gfarm_hadoop/).
>
> I have used it before, but now I'm trying to install it in a new cluster
> and for some reason it isn't working...
>
> After installing gfarm 2.5.8 at /data/local3/marilia/gfarm, hadoop 1.1.2
> at /data/local3/marilia/hadoop-1.1.2 and the plugin, when I try to list the
> new filesystem it works fine:
>
> $ bin/hadoop fs -ls gfarm:///
> Found 26 items
> -rwxrwxrwx   1        101 2013-06-26 02:36 /foo
> drwxrwxrwx   -          0 2013-06-26 02:43 /home
>
> But then when I try to run an example, the task eventually completes, but
> I get " Unable to load libGfarmFSNative library" errors. Looking at the
> logs message it seems to be a path problem, but I have tried almost
> everything and it doesn't work.
>
> The way I'm setting the path now is writing on conf/hadoop-env.sh the
> following line:
>
> export LD_LIBRARY_PATH=/data/local3/marilia/gfarm/lib
>
> I have even moved all the .so files to the hadoop directory, but I still
> get the same message...
>
>
> Any ideas?
>
> Thanks in advance.
>
>
> Log:
>
> $ bin/hadoop jar hadoop-examples-*.jar teragen 1000 gfarm:///inoa11
> Generating 1000 using 2 maps with step of 500
> 13/06/27 03:57:32 INFO mapred.JobClient: Running job: job_201306270356_0001
> 13/06/27 03:57:33 INFO mapred.JobClient:  map 0% reduce 0%
> 13/06/27 03:57:38 INFO mapred.JobClient:  map 50% reduce 0%
> 13/06/27 03:57:43 INFO mapred.JobClient: Task Id :
> attempt_201306270356_0001_m_000001_0, Status : FAILED
> java.lang.Throwable: Child Error
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>
> java.lang.Throwable: Child Error
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>
> attempt_201306270356_0001_m_000001_0: java.lang.UnsatisfiedLinkError:
> /data/local3/marilia/hadoop-1.1.2/lib/native/Linux-amd64-64/libGfarmFSNative.so:
> libgfarm.so.1: cannot open shared object file: No such file or directory
> attempt_201306270356_0001_m_000001_0:   at
> java.lang.ClassLoader$NativeLibrary.load(Native Method)
> attempt_201306270356_0001_m_000001_0:   at
> java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
>  attempt_201306270356_0001_m_000001_0:   at
> java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
> attempt_201306270356_0001_m_000001_0:   at
> java.lang.Runtime.loadLibrary0(Runtime.java:823)
> attempt_201306270356_0001_m_000001_0:   at
> java.lang.System.loadLibrary(System.java:1028)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.fs.gfarmfs.GfarmFSNative.<clinit>(GfarmFSNative.java:9)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.fs.gfarmfs.GfarmFileSystem.initialize(GfarmFileSystem.java:34)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1411)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1429)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.mapred.FileOutputCommitter.getTempTaskOutputPath(FileOutputCommitter.java:234)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.mapred.Task.initialize(Task.java:522)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:353)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> attempt_201306270356_0001_m_000001_0:   at
> java.security.AccessController.doPrivileged(Native Method)
> attempt_201306270356_0001_m_000001_0:   at
> javax.security.auth.Subject.doAs(Subject.java:396)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
> attempt_201306270356_0001_m_000001_0:   at
> org.apache.hadoop.mapred.Child.main(Child.java:249)
> attempt_201306270356_0001_m_000001_0: Unable to load libGfarmFSNative
> library
> 13/06/27 03:57:46 INFO mapred.JobClient:  map 100% reduce 0%
> 13/06/27 03:57:47 INFO mapred.JobClient: Job complete:
> job_201306270356_0001
> 13/06/27 03:57:47 INFO mapred.JobClient: Counters: 18
> 13/06/27 03:57:47 INFO mapred.JobClient:   Job Counters
> 13/06/27 03:57:47 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=10846
> 13/06/27 03:57:47 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> 13/06/27 03:57:47 INFO mapred.JobClient:     Total time spent by all maps
> waiting after reserving slots (ms)=0
> 13/06/27 03:57:47 INFO mapred.JobClient:     Launched map tasks=3
> 13/06/27 03:57:47 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> 13/06/27 03:57:47 INFO mapred.JobClient:   File Input Format Counters
> 13/06/27 03:57:47 INFO mapred.JobClient:     Bytes Read=0
> 13/06/27 03:57:47 INFO mapred.JobClient:   File Output Format Counters
> 13/06/27 03:57:47 INFO mapred.JobClient:     Bytes Written=0
> 13/06/27 03:57:47 INFO mapred.JobClient:   FileSystemCounters
> 13/06/27 03:57:47 INFO mapred.JobClient:     HDFS_BYTES_READ=164
> 13/06/27 03:57:47 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=104248
> 13/06/27 03:57:47 INFO mapred.JobClient:   Map-Reduce Framework
> 13/06/27 03:57:47 INFO mapred.JobClient:     Map input records=1000
> 13/06/27 03:57:47 INFO mapred.JobClient:     Physical memory (bytes)
> snapshot=207736832
> 13/06/27 03:57:47 INFO mapred.JobClient:     Spilled Records=0
> 13/06/27 03:57:47 INFO mapred.JobClient:     CPU time spent (ms)=190
> 13/06/27 03:57:47 INFO mapred.JobClient:     Total committed heap usage
> (bytes)=401997824
> 13/06/27 03:57:47 INFO mapred.JobClient:     Virtual memory (bytes)
> snapshot=1104424960
> 13/06/27 03:57:47 INFO mapred.JobClient:     Map input bytes=1000
> 13/06/27 03:57:47 INFO mapred.JobClient:     Map output records=1000
> 13/06/27 03:57:47 INFO mapred.JobClient:     SPLIT_RAW_BYTES=164
>
> --
> Marilia Melo
>



-- 
Harsh J
