Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2013/08/20 23:11:53 UTC

[jira] [Commented] (HADOOP-9891) CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFound

    [ https://issues.apache.org/jira/browse/HADOOP-9891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13745432#comment-13745432 ] 

Steve Loughran commented on HADOOP-9891:
----------------------------------------

(This is on a clean Linux box, with no Hadoop environment variables set other than JAVA_HOME.)

{code}
hadoop-2.1.1-SNAPSHOT$ bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.1.1-SNAPSHOT-tests.jar minicluster -rmport 8096 -jhsport 8097
{code}

The JAR file exists:
{code}
ls -l share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.1.1-SNAPSHOT-tests.jar
-rw-rw-r-- 1 stevel stevel 1429647 Aug 20 21:49 share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.1.1-SNAPSHOT-tests.jar
{code}

but the cluster doesn't come up:
{code}

13/08/20 22:03:22 INFO mapreduce.MiniHadoopClusterManager: Updated 0 configuration settings from command line.
13/08/20 22:03:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: testClusterID
13/08/20 22:03:22 INFO namenode.HostFileManager: read includes:
HostSet(
)
13/08/20 22:03:22 INFO namenode.HostFileManager: read excludes:
HostSet(
)
13/08/20 22:03:22 WARN conf.Configuration: hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
13/08/20 22:03:22 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/08/20 22:03:22 INFO util.GSet: Computing capacity for map BlocksMap
13/08/20 22:03:22 INFO util.GSet: VM type       = 32-bit
13/08/20 22:03:22 INFO util.GSet: 2.0% max memory = 494.9 MB
13/08/20 22:03:22 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/08/20 22:03:22 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/08/20 22:03:22 INFO blockmanagement.BlockManager: defaultReplication         = 1
13/08/20 22:03:22 INFO blockmanagement.BlockManager: maxReplication             = 512
13/08/20 22:03:22 INFO blockmanagement.BlockManager: minReplication             = 1
13/08/20 22:03:22 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
13/08/20 22:03:22 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
13/08/20 22:03:22 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/08/20 22:03:22 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
13/08/20 22:03:23 INFO namenode.FSNamesystem: fsOwner             = stevel (auth:SIMPLE)
13/08/20 22:03:23 INFO namenode.FSNamesystem: supergroup          = supergroup
13/08/20 22:03:23 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/08/20 22:03:23 INFO namenode.FSNamesystem: HA Enabled: false
13/08/20 22:03:23 INFO namenode.FSNamesystem: Append Enabled: true
13/08/20 22:03:23 INFO util.GSet: Computing capacity for map INodeMap
13/08/20 22:03:23 INFO util.GSet: VM type       = 32-bit
13/08/20 22:03:23 INFO util.GSet: 1.0% max memory = 494.9 MB
13/08/20 22:03:23 INFO util.GSet: capacity      = 2^20 = 1048576 entries
13/08/20 22:03:23 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 0
13/08/20 22:03:23 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/08/20 22:03:23 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/08/20 22:03:23 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/08/20 22:03:23 INFO util.GSet: VM type       = 32-bit
13/08/20 22:03:23 INFO util.GSet: 0.029999999329447746% max memory = 494.9 MB
13/08/20 22:03:23 INFO util.GSet: capacity      = 2^15 = 32768 entries
13/08/20 22:03:23 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1 has been successfully formatted.
13/08/20 22:03:23 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name2 has been successfully formatted.
13/08/20 22:03:23 INFO namenode.FSImage: Saving image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name2/current/fsimage.ckpt_0000000000000000000 using no compression
13/08/20 22:03:23 INFO namenode.FSImage: Saving image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current/fsimage.ckpt_0000000000000000000 using no compression
13/08/20 22:03:23 INFO namenode.FSImage: Image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current/fsimage.ckpt_0000000000000000000 of size 198 bytes saved in 0 seconds.
13/08/20 22:03:23 INFO namenode.FSImage: Image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name2/current/fsimage.ckpt_0000000000000000000 of size 198 bytes saved in 0 seconds.
13/08/20 22:03:23 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/08/20 22:03:23 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
13/08/20 22:03:23 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
13/08/20 22:03:23 INFO impl.MetricsSystemImpl: NameNode metrics system started
13/08/20 22:03:23 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
13/08/20 22:03:23 INFO http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
13/08/20 22:03:23 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
13/08/20 22:03:23 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
13/08/20 22:03:23 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
13/08/20 22:03:23 INFO http.HttpServer: dfs.webhdfs.enabled = false
13/08/20 22:03:23 INFO http.HttpServer: Jetty bound to port 49811
13/08/20 22:03:23 INFO mortbay.log: jetty-6.1.26
13/08/20 22:03:23 INFO mortbay.log: Started SelectChannelConnector@localhost:49811
13/08/20 22:03:23 INFO namenode.NameNode: Web-server up at: localhost:49811
13/08/20 22:03:23 INFO namenode.HostFileManager: read includes:
HostSet(
)
13/08/20 22:03:23 INFO namenode.HostFileManager: read excludes:
HostSet(
)
13/08/20 22:03:23 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/08/20 22:03:23 INFO util.GSet: Computing capacity for map BlocksMap
13/08/20 22:03:23 INFO util.GSet: VM type       = 32-bit
13/08/20 22:03:23 INFO util.GSet: 2.0% max memory = 494.9 MB
13/08/20 22:03:23 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/08/20 22:03:23 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/08/20 22:03:23 INFO blockmanagement.BlockManager: defaultReplication         = 1
13/08/20 22:03:23 INFO blockmanagement.BlockManager: maxReplication             = 512
13/08/20 22:03:23 INFO blockmanagement.BlockManager: minReplication             = 1
13/08/20 22:03:23 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
13/08/20 22:03:23 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
13/08/20 22:03:23 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/08/20 22:03:23 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
13/08/20 22:03:23 INFO namenode.FSNamesystem: fsOwner             = stevel (auth:SIMPLE)
13/08/20 22:03:23 INFO namenode.FSNamesystem: supergroup          = supergroup
13/08/20 22:03:23 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/08/20 22:03:23 INFO namenode.FSNamesystem: HA Enabled: false
13/08/20 22:03:23 INFO namenode.FSNamesystem: Append Enabled: true
13/08/20 22:03:23 INFO util.GSet: Computing capacity for map INodeMap
13/08/20 22:03:23 INFO util.GSet: VM type       = 32-bit
13/08/20 22:03:23 INFO util.GSet: 1.0% max memory = 494.9 MB
13/08/20 22:03:23 INFO util.GSet: capacity      = 2^20 = 1048576 entries
13/08/20 22:03:23 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 0
13/08/20 22:03:23 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/08/20 22:03:23 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/08/20 22:03:23 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/08/20 22:03:23 INFO util.GSet: VM type       = 32-bit
13/08/20 22:03:23 INFO util.GSet: 0.029999999329447746% max memory = 494.9 MB
13/08/20 22:03:23 INFO util.GSet: capacity      = 2^15 = 32768 entries
13/08/20 22:03:23 INFO common.Storage: Lock on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/in_use.lock acquired by nodename 13794@ubuntu
13/08/20 22:03:23 INFO common.Storage: Lock on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name2/in_use.lock acquired by nodename 13794@ubuntu
13/08/20 22:03:23 INFO namenode.FileJournalManager: Recovering unfinalized segments in /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current
13/08/20 22:03:23 INFO namenode.FileJournalManager: Recovering unfinalized segments in /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name2/current
13/08/20 22:03:23 INFO namenode.FSImage: No edit log streams selected.
13/08/20 22:03:23 INFO namenode.FSImage: Loading image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current/fsimage_0000000000000000000 using no compression
13/08/20 22:03:23 INFO namenode.FSImage: Number of files = 1
13/08/20 22:03:23 INFO namenode.FSImage: Number of files under construction = 0
13/08/20 22:03:23 INFO namenode.FSImage: Image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current/fsimage_0000000000000000000 of size 198 bytes loaded in 0 seconds.
13/08/20 22:03:23 INFO namenode.FSImage: Loaded image for txid 0 from /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current/fsimage_0000000000000000000
13/08/20 22:03:23 INFO namenode.FSEditLog: Starting log segment at 1
13/08/20 22:03:23 INFO namenode.NameCache: initialized with 0 entries 0 lookups
13/08/20 22:03:23 INFO namenode.FSNamesystem: Finished loading FSImage in 99 msecs
13/08/20 22:03:23 INFO ipc.Server: Starting Socket Reader #1 for port 58332
13/08/20 22:03:24 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
13/08/20 22:03:24 INFO namenode.FSNamesystem: Number of blocks under construction: 0
13/08/20 22:03:24 INFO namenode.FSNamesystem: Number of blocks under construction: 0
13/08/20 22:03:24 INFO namenode.FSNamesystem: initializing replication queues
13/08/20 22:03:24 INFO blockmanagement.BlockManager: Total number of blocks            = 0
13/08/20 22:03:24 INFO blockmanagement.BlockManager: Number of invalid blocks          = 0
13/08/20 22:03:24 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0
13/08/20 22:03:24 INFO blockmanagement.BlockManager: Number of  over-replicated blocks = 0
13/08/20 22:03:24 INFO blockmanagement.BlockManager: Number of blocks being written    = 0
13/08/20 22:03:24 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 13 msec
13/08/20 22:03:24 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs
13/08/20 22:03:24 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
13/08/20 22:03:24 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
13/08/20 22:03:24 INFO ipc.Server: IPC Server Responder: starting
13/08/20 22:03:24 INFO ipc.Server: IPC Server listener on 58332: starting
13/08/20 22:03:24 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:58332
13/08/20 22:03:24 INFO namenode.FSNamesystem: Starting services required for active state
13/08/20 22:03:24 INFO hdfs.MiniDFSCluster: Starting DataNode 0 with dfs.datanode.data.dir: file:/home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1,file:/home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2
13/08/20 22:03:24 INFO impl.MetricsSystemImpl: DataNode metrics system started (again)
13/08/20 22:03:24 INFO datanode.DataNode: Configured hostname is 127.0.0.1
13/08/20 22:03:24 INFO datanode.DataNode: Opened streaming server at /127.0.0.1:47429
13/08/20 22:03:24 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
13/08/20 22:03:24 INFO http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
13/08/20 22:03:24 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
13/08/20 22:03:24 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
13/08/20 22:03:24 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
13/08/20 22:03:24 INFO datanode.DataNode: Opened info server at localhost:0
13/08/20 22:03:24 INFO datanode.DataNode: dfs.webhdfs.enabled = false
13/08/20 22:03:24 INFO http.HttpServer: Jetty bound to port 59754
13/08/20 22:03:24 INFO mortbay.log: jetty-6.1.26
13/08/20 22:03:24 INFO mortbay.log: Started SelectChannelConnector@localhost:59754
13/08/20 22:03:24 INFO datanode.DataNode: Opened IPC server at /127.0.0.1:34353
13/08/20 22:03:24 INFO ipc.Server: Starting Socket Reader #1 for port 34353
13/08/20 22:03:24 INFO datanode.DataNode: Refresh request received for nameservices: null
13/08/20 22:03:24 INFO datanode.DataNode: Starting BPOfferServices for nameservices: <default>
13/08/20 22:03:24 INFO datanode.DataNode: Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:58332 starting to offer service
13/08/20 22:03:24 INFO ipc.Server: IPC Server Responder: starting
13/08/20 22:03:24 INFO ipc.Server: IPC Server listener on 34353: starting
13/08/20 22:03:24 INFO common.Storage: Lock on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/in_use.lock acquired by nodename 13794@ubuntu
13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1 is not formatted
13/08/20 22:03:24 INFO common.Storage: Formatting ...
13/08/20 22:03:24 INFO hdfs.MiniDFSCluster: Waiting for cluster to become active
13/08/20 22:03:24 INFO common.Storage: Lock on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/in_use.lock acquired by nodename 13794@ubuntu
13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2 is not formatted
13/08/20 22:03:24 INFO common.Storage: Formatting ...
13/08/20 22:03:24 INFO common.Storage: Locking is disabled
13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current/BP-604112716-192.168.1.132-1377032603159 is not formatted.
13/08/20 22:03:24 INFO common.Storage: Formatting ...
13/08/20 22:03:24 INFO common.Storage: Formatting block pool BP-604112716-192.168.1.132-1377032603159 directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current/BP-604112716-192.168.1.132-1377032603159/current
13/08/20 22:03:24 INFO common.Storage: Locking is disabled
13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current/BP-604112716-192.168.1.132-1377032603159 is not formatted.
13/08/20 22:03:24 INFO common.Storage: Formatting ...
13/08/20 22:03:24 INFO common.Storage: Formatting block pool BP-604112716-192.168.1.132-1377032603159 directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current/BP-604112716-192.168.1.132-1377032603159/current
13/08/20 22:03:24 INFO datanode.DataNode: Setting up storage: nsid=355659070;bpid=BP-604112716-192.168.1.132-1377032603159;lv=-47;nsInfo=lv=-47;cid=testClusterID;nsid=355659070;c=0;bpid=BP-604112716-192.168.1.132-1377032603159
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Added volume - /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Added volume - /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Registered FSDatasetState MBean
13/08/20 22:03:24 INFO datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1377035360956 with interval 21600000
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Adding block pool BP-604112716-192.168.1.132-1377032603159
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Scanning block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current...
13/08/20 22:03:24 INFO hdfs.MiniDFSCluster: Waiting for cluster to become active
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Scanning block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current...
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-604112716-192.168.1.132-1377032603159 on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current: 16ms
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-604112716-192.168.1.132-1377032603159 on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current: 22ms
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-604112716-192.168.1.132-1377032603159: 22ms
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current...
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current: 0ms
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current...
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current: 1ms
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
13/08/20 22:03:24 INFO datanode.DataNode: Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332 beginning handshake with NN
13/08/20 22:03:24 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-1166679418-192.168.1.132-47429-1377032604876, infoPort=59754, ipcPort=34353, storageInfo=lv=-47;cid=testClusterID;nsid=355659070;c=0) storage DS-1166679418-192.168.1.132-47429-1377032604876
13/08/20 22:03:25 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:47429
13/08/20 22:03:25 INFO datanode.DataNode: Block pool Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332 successfully registered with NN
13/08/20 22:03:25 INFO datanode.DataNode: For namenode localhost/127.0.0.1:58332 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
13/08/20 22:03:25 INFO datanode.DataNode: Namenode Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332 trying to claim ACTIVE state with txid=1
13/08/20 22:03:25 INFO datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332
13/08/20 22:03:25 INFO blockmanagement.BlockManager: BLOCK* processReport: Received first block report from 127.0.0.1:47429 after starting up or becoming active. Its block contents are no longer considered stale
13/08/20 22:03:25 INFO BlockStateChange: BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-1166679418-192.168.1.132-47429-1377032604876, infoPort=59754, ipcPort=34353, storageInfo=lv=-47;cid=testClusterID;nsid=355659070;c=0), blocks: 0, processing time: 4 msecs
13/08/20 22:03:25 INFO datanode.DataNode: BlockReport of 0 blocks took 1 msec to generate and 9 msecs for RPC and NN processing
13/08/20 22:03:25 INFO datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@381a53
13/08/20 22:03:25 INFO util.GSet: Computing capacity for map BlockMap
13/08/20 22:03:25 INFO util.GSet: VM type       = 32-bit
13/08/20 22:03:25 INFO util.GSet: 0.5% max memory = 494.9 MB
13/08/20 22:03:25 INFO util.GSet: capacity      = 2^19 = 524288 entries
13/08/20 22:03:25 INFO datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-604112716-192.168.1.132-1377032603159
13/08/20 22:03:25 INFO datanode.DataBlockScanner: Added bpid=BP-604112716-192.168.1.132-1377032603159 to blockPoolScannerMap, new size=1
13/08/20 22:03:25 INFO hdfs.MiniDFSCluster: Cluster is active
13/08/20 22:03:25 INFO mapreduce.MiniHadoopClusterManager: Started MiniDFSCluster -- namenode on port 58332
java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/server/MiniYARNCluster
	at org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:170)
	at org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:129)
	at org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:314)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
	at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:115)
	at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:123)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.server.MiniYARNCluster
	at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
	... 16 more
{code}
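The trace shows that {{MiniHadoopClusterManager}} itself loads and runs fine; it is only the reflective reach into {{org.apache.hadoop.yarn.server.MiniYARNCluster}} that fails, so the YARN server test classes are missing from the runtime classpath rather than the jobclient tests jar being broken. A minimal standalone probe (a hypothetical diagnostic class, not part of Hadoop) that could be run with the same classpath as {{bin/hadoop jar ...}} to confirm which of the two classes is visible:

```java
// Hypothetical diagnostic, not part of Hadoop. Compile it anywhere and run
// it with the classpath printed by `bin/hadoop classpath` to see which of
// the two classes from the stack trace is actually resolvable.
public class ClasspathProbe {
    public static void main(String[] args) {
        probe("org.apache.hadoop.mapreduce.MiniHadoopClusterManager");
        probe("org.apache.hadoop.yarn.server.MiniYARNCluster");
    }

    static void probe(String name) {
        try {
            // initialize=false: we only care about visibility, not about
            // running the class's static initializers.
            Class.forName(name, false, ClasspathProbe.class.getClassLoader());
            System.out.println(name + ": found");
        } catch (ClassNotFoundException e) {
            System.out.println(name + ": missing");
        }
    }
}
```

On the box above this should report the first class found and the second missing, matching the {{NoClassDefFoundError}}; from there, checking which jar under {{share/hadoop/yarn/}} (if any) actually contains {{MiniYARNCluster}} would pin down whether this is a packaging gap or a classpath-setup gap in the documented instructions.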
                
> CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFound
> -------------------------------------------------------------------
>
>                 Key: HADOOP-9891
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9891
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: documentation
>    Affects Versions: 2.1.1-beta
>            Reporter: Steve Loughran
>            Priority: Minor
>
> The instructions on how to start up a mini CLI cluster in {{CLIMiniCluster.md}} don't work; it looks like {{MiniYarnCluster}} isn't on the classpath

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira