Posted to common-user@hadoop.apache.org by Tim Dunphy <bl...@gmail.com> on 2014/11/16 00:48:20 UTC

install bigtop hadoop on a t2.micro instance

Hey all,

 I installed Bigtop Hadoop on a t2.micro instance on Amazon EC2, and I got
the following result when trying to initialize the namenode:


root@hadoop1:/home/ec2-user] #/etc/init.d/hadoop-hdfs-namenode init
14/11/15 18:42:20 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop1.jokefire.com/172.31.59.97
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.0.5-alpha
STARTUP_MSG:   classpath =
/etc/hadoop/conf:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/avro-1.5.3.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/zookeeper-3.4.2.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar:/usr/lib/hadoop/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/.//hadoop-annotations-2.0.5-a
lpha.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.5-alpha.jar:/usr/lib/hadoop/.//hadoop-common-2.0.5-alpha-tests.jar:/usr/lib/hadoop/.//hadoop-common-2.0.5-alpha.jar:/usr/lib/hadoop/contrib/capacity-scheduler/*.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.5-alpha-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/avro-1.5.3.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.1.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.8.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.8.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.8.jar:/usr/lib/hadoop-yarn/lib/junit-4.8.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/net
ty-3.5.11.Final.jar:/usr/lib/hadoop-yarn/lib/paranamer-2.3.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-yarn/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-site-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/avro-1.5.3.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.1.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.8.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.8.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.8.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.8.2.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.5.11.Final.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.0.5-alpha.jar:/usr/lib
/hadoop-mapreduce/.//hadoop-extras-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.0.5-alpha-tests.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.0.5-alpha.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/bigtop.git
-r dee8c65d6efb8244d16a3692a558c46744c87c92; compiled by 'jenkins' on
2013-06-09T06:06Z
STARTUP_MSG:   java = 1.6.0_45
************************************************************/
Formatting using clusterid: CID-cb75b607-4341-4332-bb3b-3f2f85e0fd5e
14/11/15 18:42:21 INFO util.HostsFileReader: Refreshing hosts
(include/exclude) list
14/11/15 18:42:21 INFO blockmanagement.DatanodeManager:
dfs.block.invalidate.limit=1000
14/11/15 18:42:21 INFO blockmanagement.BlockManager:
dfs.block.access.token.enable=false
14/11/15 18:42:21 INFO blockmanagement.BlockManager: defaultReplication
    = 1
14/11/15 18:42:21 INFO blockmanagement.BlockManager: maxReplication
    = 512
14/11/15 18:42:21 INFO blockmanagement.BlockManager: minReplication
    = 1
14/11/15 18:42:21 INFO blockmanagement.BlockManager: maxReplicationStreams
     = 2
14/11/15 18:42:21 INFO blockmanagement.BlockManager:
shouldCheckForEnoughRacks  = false
14/11/15 18:42:21 INFO blockmanagement.BlockManager:
replicationRecheckInterval = 3000
14/11/15 18:42:21 INFO blockmanagement.BlockManager: encryptDataTransfer
     = false
14/11/15 18:42:21 INFO namenode.FSNamesystem: fsOwner             = hdfs
(auth:SIMPLE)
14/11/15 18:42:21 INFO namenode.FSNamesystem: supergroup          =
supergroup
14/11/15 18:42:21 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/11/15 18:42:21 INFO namenode.FSNamesystem: HA Enabled: false
14/11/15 18:42:21 INFO namenode.FSNamesystem: Append Enabled: true
14/11/15 18:42:21 INFO namenode.NameNode: Caching file names occuring more
than 10 times
14/11/15 18:42:21 INFO namenode.FSNamesystem:
dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/11/15 18:42:21 INFO namenode.FSNamesystem:
dfs.namenode.safemode.min.datanodes = 0
14/11/15 18:42:21 INFO namenode.FSNamesystem:
dfs.namenode.safemode.extension     = 0
Re-format filesystem in Storage Directory
/var/lib/hadoop-hdfs/cache/hdfs/dfs/name ? (Y or N) y
14/11/15 18:42:26 INFO common.Storage: Storage directory
/var/lib/hadoop-hdfs/cache/hdfs/dfs/name has been successfully formatted.
14/11/15 18:42:26 INFO namenode.FSImage: Saving image file
/var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/fsimage.ckpt_0000000000000000000
using no compression
14/11/15 18:42:26 INFO namenode.FSImage: Image file of size 119 saved in 0
seconds.
14/11/15 18:42:26 INFO namenode.NNStorageRetentionManager: Going to retain
1 images with txid >= 0
14/11/15 18:42:26 INFO util.ExitUtil: Exiting with status 0
14/11/15 18:42:26 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1.jokefire.com/172.31.59.97
************************************************************/

The format appears to succeed (it exits with status 0), but the NameNode shuts
down right afterward. Does a t2.micro have enough resources to run Bigtop
Hadoop, or is something else going on here?

Thanks
Tim

-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B

Re: install bigtop hadoop on a t2.micro instance

Posted by Olivier Renault <or...@hortonworks.com>.
After formatting, the namenode will shut down; you will then need to
start it again. It looks like the init script's init command just
formats the namenode. Do you get an error when starting the namenode?
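
The two-step bring-up described above can be sketched as follows. The init-script path and log location follow Bigtop's usual packaging layout and are assumptions, not confirmed in this thread; adjust for your install.

```shell
# Format once, then start the daemon: "init" is a one-shot format that exits,
# it does not leave a NameNode running.
NN=/etc/init.d/hadoop-hdfs-namenode   # assumed Bigtop package path

if [ -x "$NN" ]; then
  "$NN" init      # formats the metadata directory, then exits 0
  "$NN" start     # actually launches the NameNode daemon
  "$NN" status    # confirm it stayed up; check /var/log/hadoop-hdfs/ if not
else
  echo "init script not found at $NN; is the hadoop-hdfs-namenode package installed?"
fi
```

So the log you posted is the expected output of the format step, not a crash; whether the instance is big enough only shows up once you try to keep the daemon running.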

Also, may I ask why you've installed 2.0.5-alpha? The latest stable release is
2.4.1.
http://hadoop.apache.org/docs/stable2/

Thanks,
Olivier

On 15 November 2014 23:48, Tim Dunphy <bl...@gmail.com> wrote:

> Hey all,
>
>  I installed bigtop hadoop on a t2.micro instance over at amazon. And I
> got the following result when trying to initialize the namenode:
>
>
> [namenode format log snipped; identical to the log in the original message above]
>
> Are there enough resources on a t2.micro to run bigtop hadoop?
>
> I'm wondering what the problem here is.
>
> Thanks
> Tim
>
> --
> GPG me!!
>
> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


Re: install bigtop hadoop on a t2.micro instance

Posted by Olivier Renault <or...@hortonworks.com>.
After formatting the namenode, it will shut down; you will then need to
start it again. It looks like running the init script with the init command
just formats the namenode. Do you get an error when starting the namenode?

Also, may I ask why you've installed 2.0.5-alpha? The latest stable release
is 2.4.1:
http://hadoop.apache.org/docs/stable2/
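
In other words, the sequence would look roughly like the sketch below. The service name matches the Bigtop init script shown in the thread; the log path is an assumption based on typical Bigtop packaging, so adjust it for your install:

```shell
# One-time step: format the namenode metadata directory.
# This process exits after formatting -- that is expected, not an error.
sudo /etc/init.d/hadoop-hdfs-namenode init

# Now actually start the daemon and confirm it stays up.
sudo /etc/init.d/hadoop-hdfs-namenode start
sudo /etc/init.d/hadoop-hdfs-namenode status

# If the daemon dies, the real error is usually in the namenode log
# (path assumed from common Bigtop defaults; check /var/log/hadoop-hdfs/):
sudo tail -n 50 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-$(hostname).log
```

On a t2.micro (1 GB RAM) you may also need to lower the daemon heap sizes before the JVM will start reliably.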

Thanks,
Olivier

On 15 November 2014 23:48, Tim Dunphy <bl...@gmail.com> wrote:

> Hey all,
>
>  I installed bigtop hadoop on a t2.micro instance over at amazon. And I
> got the following result when trying to initialize the namenode:
>
>
> root@hadoop1:/home/ec2-user] #/etc/init.d/hadoop-hdfs-namenode init
> 14/11/15 18:42:20 INFO namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = hadoop1.jokefire.com/172.31.59.97
> STARTUP_MSG:   args = [-format]
> STARTUP_MSG:   version = 2.0.5-alpha
> STARTUP_MSG:   classpath =
> /etc/hadoop/conf:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/avro-1.5.3.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/zookeeper-3.4.2.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar:/usr/lib/hadoop/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/.//hadoop-annotations-2.0.5
-alpha.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.5-alpha.jar:/usr/lib/hadoop/.//hadoop-common-2.0.5-alpha-tests.jar:/usr/lib/hadoop/.//hadoop-common-2.0.5-alpha.jar:/usr/lib/hadoop/contrib/capacity-scheduler/*.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.5-alpha-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/avro-1.5.3.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.1.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.8.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.8.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.8.jar:/usr/lib/hadoop-yarn/lib/junit-4.8.2.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/n
etty-3.5.11.Final.jar:/usr/lib/hadoop-yarn/lib/paranamer-2.3.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-yarn/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-api-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-client-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-common-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.0.5-alpha.jar:/usr/lib/hadoop-yarn/.//hadoop-yarn-site-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop-mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop-mapreduce/lib/avro-1.5.3.jar:/usr/lib/hadoop-mapreduce/lib/commons-io-2.1.jar:/usr/lib/hadoop-mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop-mapreduce/lib/jersey-core-1.8.jar:/usr/lib/hadoop-mapreduce/lib/jersey-guice-1.8.jar:/usr/lib/hadoop-mapreduce/lib/jersey-server-1.8.jar:/usr/lib/hadoop-mapreduce/lib/junit-4.8.2.jar:/usr/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/lib/netty-3.5.11.Final.jar:/usr/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-mapreduce/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.0.5-alpha.jar:/usr/l
ib/hadoop-mapreduce/.//hadoop-extras-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.0.5-alpha-tests.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.0.5-alpha.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.0.5-alpha.jar
> STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/bigtop.git
> -r dee8c65d6efb8244d16a3692a558c46744c87c92; compiled by 'jenkins' on
> 2013-06-09T06:06Z
> STARTUP_MSG:   java = 1.6.0_45
> ************************************************************/
> Formatting using clusterid: CID-cb75b607-4341-4332-bb3b-3f2f85e0fd5e
> 14/11/15 18:42:21 INFO util.HostsFileReader: Refreshing hosts
> (include/exclude) list
> 14/11/15 18:42:21 INFO blockmanagement.DatanodeManager:
> dfs.block.invalidate.limit=1000
> 14/11/15 18:42:21 INFO blockmanagement.BlockManager:
> dfs.block.access.token.enable=false
> 14/11/15 18:42:21 INFO blockmanagement.BlockManager: defaultReplication
>       = 1
> 14/11/15 18:42:21 INFO blockmanagement.BlockManager: maxReplication
>       = 512
> 14/11/15 18:42:21 INFO blockmanagement.BlockManager: minReplication
>       = 1
> 14/11/15 18:42:21 INFO blockmanagement.BlockManager: maxReplicationStreams
>      = 2
> 14/11/15 18:42:21 INFO blockmanagement.BlockManager:
> shouldCheckForEnoughRacks  = false
> 14/11/15 18:42:21 INFO blockmanagement.BlockManager:
> replicationRecheckInterval = 3000
> 14/11/15 18:42:21 INFO blockmanagement.BlockManager: encryptDataTransfer
>      = false
> 14/11/15 18:42:21 INFO namenode.FSNamesystem: fsOwner             = hdfs
> (auth:SIMPLE)
> 14/11/15 18:42:21 INFO namenode.FSNamesystem: supergroup          =
> supergroup
> 14/11/15 18:42:21 INFO namenode.FSNamesystem: isPermissionEnabled = true
> 14/11/15 18:42:21 INFO namenode.FSNamesystem: HA Enabled: false
> 14/11/15 18:42:21 INFO namenode.FSNamesystem: Append Enabled: true
> 14/11/15 18:42:21 INFO namenode.NameNode: Caching file names occuring more
> than 10 times
> 14/11/15 18:42:21 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 14/11/15 18:42:21 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.min.datanodes = 0
> 14/11/15 18:42:21 INFO namenode.FSNamesystem:
> dfs.namenode.safemode.extension     = 0
> Re-format filesystem in Storage Directory
> /var/lib/hadoop-hdfs/cache/hdfs/dfs/name ? (Y or N) y
> 14/11/15 18:42:26 INFO common.Storage: Storage directory
> /var/lib/hadoop-hdfs/cache/hdfs/dfs/name has been successfully formatted.
> 14/11/15 18:42:26 INFO namenode.FSImage: Saving image file
> /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/fsimage.ckpt_0000000000000000000
> using no compression
> 14/11/15 18:42:26 INFO namenode.FSImage: Image file of size 119 saved in 0
> seconds.
> 14/11/15 18:42:26 INFO namenode.NNStorageRetentionManager: Going to retain
> 1 images with txid >= 0
> 14/11/15 18:42:26 INFO util.ExitUtil: Exiting with status 0
> 14/11/15 18:42:26 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at hadoop1.jokefire.com/172.31.59.97
> ************************************************************/
>
> Are there enough resources on a t2.micro to run bigtop hadoop?
>
> I'm wondering what the problem here is.
>
> Thanks
> Tim
>
> --
> GPG me!!
>
> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>
>

