Posted to dev@ambari.apache.org by "Dmytro Sen (JIRA)" <ji...@apache.org> on 2014/10/16 17:12:33 UTC

[jira] [Created] (AMBARI-7814) Flume agent on Ambari uses the default Java on machine

Dmytro Sen created AMBARI-7814:
----------------------------------

             Summary: Flume agent on Ambari uses the default Java on machine
                 Key: AMBARI-7814
                 URL: https://issues.apache.org/jira/browse/AMBARI-7814
             Project: Ambari
          Issue Type: Bug
          Components: stacks
    Affects Versions: 1.7.0
            Reporter: Dmytro Sen
            Assignee: Dmytro Sen
            Priority: Blocker
             Fix For: 1.7.0


When running the Flume agent on an Ambari-installed cluster, the command aborts with an out-of-memory error. Here is a sample run:
{noformat}
/usr/hdp/current/flume-client/bin/flume-ng agent -n agent -c /usr/hdp/current/flume-client/conf -f /grid/0/hadoopqe/tests/flume/conf/avro-memory-file_roll.properties -Dflume.root.logger=DEBUG,console
Warning: JAVA_HOME is not set!
Info: Including Hadoop libraries found via (/usr/bin/hadoop) for HDFS access
Info: Excluding /usr/hdp/2.2.0.0-917/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/hdp/2.2.0.0-917/hadoop/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Including HBASE libraries found via (/usr/bin/hbase) for HBASE access
Info: Excluding /usr/hdp/2.2.0.0-917/hbase/lib/slf4j-api-1.6.4.jar from classpath
Info: Excluding /usr/hdp/2.2.0.0-917/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/hdp/2.2.0.0-917/hadoop/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /usr/hdp/2.2.0.0-917/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/hdp/2.2.0.0-917/hadoop/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /usr/hdp/2.2.0.0-917/zookeeper/lib/slf4j-api-1.6.1.jar from classpath
Info: Excluding /usr/hdp/2.2.0.0-917/zookeeper/lib/slf4j-log4j12-1.6.1.jar from classpath
Info: Including Hive libraries found via () for Hive access
+ exec /usr/bin/java -Xmx20m -Dflume.root.logger=DEBUG,console -cp '...' -n agent -f /grid/0/hadoopqe/tests/flume/conf/avro-memory-file_roll.properties
GC Warning: Out of Memory!  Returning NIL!
Exception during runtime initialization
GC Warning: Out of Memory!  Returning NIL!
{noformat}
I think the issue here is that, as the console output shows, the Flume agent complains that JAVA_HOME is not set, so it picks up the java on the default path, which is /usr/bin/java:
{noformat}
# /usr/bin/java -version
java version "1.5.0"
gij (GNU libgcj) version 4.4.7 20120313 (Red Hat 4.4.7-4)

Copyright (C) 2007 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
{noformat}
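A quick way to confirm what the launcher falls back to is to check the environment on the failing host (the commands below are illustrative, not part of the original run):
{noformat}
# Check whether JAVA_HOME is set and which java is first on PATH
echo "${JAVA_HOME:-<unset>}"   # prints "<unset>" here, matching the launcher warning
which java                     # -> /usr/bin/java
readlink -f /usr/bin/java      # resolve any symlinks to the actual JVM binary (gij/libgcj here)
{noformat}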
So the agent ends up running on a 1.5.0 release of Java (GNU gij) when it should have used the installed JDK:
{noformat}
# /usr/jdk64/jdk1.7.0_67/bin/java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
{noformat}
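As a one-off check, exporting JAVA_HOME before launching should make the flume-ng wrapper pick up this JDK instead of /usr/bin/java (a sketch based on the "JAVA_HOME is not set" warning above; the paths are the ones from this run):
{noformat}
export JAVA_HOME=/usr/jdk64/jdk1.7.0_67
/usr/hdp/current/flume-client/bin/flume-ng agent -n agent \
  -c /usr/hdp/current/flume-client/conf \
  -f /grid/0/hadoopqe/tests/flume/conf/avro-memory-file_roll.properties \
  -Dflume.root.logger=DEBUG,console
{noformat}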
Basically, we should set JAVA_HOME in $FLUME_CONF_DIR/flume-env.sh. In the gsInstaller-based install we append the following line to flume-env.sh:
{noformat}
export JAVA_HOME=<Installed Java Home>
{noformat}
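For example, on this host the appended line would be the following (JDK path taken from the version check above; Ambari should substitute whatever JDK it installed or was configured with for the cluster):
{noformat}
# appended to $FLUME_CONF_DIR/flume-env.sh
export JAVA_HOME=/usr/jdk64/jdk1.7.0_67
{noformat}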



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)